00:00:00.001 Started by upstream project "autotest-per-patch" build number 132519
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.116 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:07.119 The recommended git tool is: git
00:00:07.119 using credential 00000000-0000-0000-0000-000000000002
00:00:07.122 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:07.134 Fetching changes from the remote Git repository
00:00:07.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:07.154 Using shallow fetch with depth 1
00:00:07.154 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:07.154 > git --version # timeout=10
00:00:07.165 > git --version # 'git version 2.39.2'
00:00:07.165 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:07.175 Setting http proxy: proxy-dmz.intel.com:911
00:00:07.175 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:13.742 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:13.753 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:13.765 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:13.765 > git config core.sparsecheckout # timeout=10
00:00:13.777 > git read-tree -mu HEAD # timeout=10
00:00:13.792 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:13.818 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:13.819 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:13.906 [Pipeline] Start of Pipeline
00:00:13.919 [Pipeline] library
00:00:13.920 Loading library shm_lib@master
00:00:13.920 Library shm_lib@master is cached. Copying from home.
00:00:13.935 [Pipeline] node
00:00:13.943 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:13.945 [Pipeline] {
00:00:13.955 [Pipeline] catchError
00:00:13.956 [Pipeline] {
00:00:13.966 [Pipeline] wrap
00:00:13.972 [Pipeline] {
00:00:13.978 [Pipeline] stage
00:00:13.979 [Pipeline] { (Prologue)
00:00:14.162 [Pipeline] sh
00:00:14.451 + logger -p user.info -t JENKINS-CI
00:00:14.477 [Pipeline] echo
00:00:14.480 Node: CYP9
00:00:14.489 [Pipeline] sh
00:00:14.802 [Pipeline] setCustomBuildProperty
00:00:14.817 [Pipeline] echo
00:00:14.819 Cleanup processes
00:00:14.825 [Pipeline] sh
00:00:15.117 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:15.117 1129418 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:15.133 [Pipeline] sh
00:00:15.519 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:15.519 ++ grep -v 'sudo pgrep'
00:00:15.519 ++ awk '{print $1}'
00:00:15.519 + sudo kill -9
00:00:15.519 + true
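The cleanup step above is a small but useful idiom: pgrep -af lists every process whose full command line mentions the workspace, grep -v drops the pgrep invocation itself, awk keeps only the PID column, and the kill is allowed to fail when nothing matched (here the list came back empty, hence the bare "+ sudo kill -9"). A standalone sketch of the same pattern, with the workspace path as a placeholder:

    #!/usr/bin/env bash
    # Kill leftover processes still running out of a workspace directory.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # placeholder path
    # pgrep -af matches full command lines; filter out our own pgrep,
    # keep the PID column, and tolerate an empty result.
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true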
00:00:15.536 [Pipeline] cleanWs
00:00:15.546 [WS-CLEANUP] Deleting project workspace...
00:00:15.546 [WS-CLEANUP] Deferred wipeout is used...
00:00:15.554 [WS-CLEANUP] done
00:00:15.558 [Pipeline] setCustomBuildProperty
00:00:15.574 [Pipeline] sh
00:00:15.861 + sudo git config --global --replace-all safe.directory '*'
00:00:15.959 [Pipeline] httpRequest
00:00:16.415 [Pipeline] echo
00:00:16.416 Sorcerer 10.211.164.20 is alive
00:00:16.423 [Pipeline] retry
00:00:16.425 [Pipeline] {
00:00:16.438 [Pipeline] httpRequest
00:00:16.443 HttpMethod: GET
00:00:16.443 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.444 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.459 Response Code: HTTP/1.1 200 OK
00:00:16.459 Success: Status code 200 is in the accepted range: 200,404
00:00:16.460 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.661 [Pipeline] }
00:00:20.672 [Pipeline] // retry
00:00:20.678 [Pipeline] sh
00:00:20.963 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.983 [Pipeline] httpRequest
00:00:21.355 [Pipeline] echo
00:00:21.356 Sorcerer 10.211.164.20 is alive
00:00:21.364 [Pipeline] retry
00:00:21.366 [Pipeline] {
00:00:21.377 [Pipeline] httpRequest
00:00:21.382 HttpMethod: GET
00:00:21.382 URL: http://10.211.164.20/packages/spdk_9ebbe7008a613e6114d16afbfb7753698ae9c76b.tar.gz
00:00:21.383 Sending request to url: http://10.211.164.20/packages/spdk_9ebbe7008a613e6114d16afbfb7753698ae9c76b.tar.gz
00:00:21.399 Response Code: HTTP/1.1 200 OK
00:00:21.399 Success: Status code 200 is in the accepted range: 200,404
00:00:21.400 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_9ebbe7008a613e6114d16afbfb7753698ae9c76b.tar.gz
00:03:33.582 [Pipeline] }
00:03:33.595 [Pipeline] // retry
00:03:33.601 [Pipeline] sh
00:03:33.889 + tar --no-same-owner -xf spdk_9ebbe7008a613e6114d16afbfb7753698ae9c76b.tar.gz
00:03:37.206 [Pipeline] sh
00:03:37.496 + git -C spdk log --oneline -n5
00:03:37.496 9ebbe7008 blob: fix possible memory leak in bs loading
00:03:37.496 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen
00:03:37.496 9885e1d29 lib/blob: cluster_sz must be a multiple of PAGE
00:03:37.496 9a6847636 bdev/nvme: Fix spdk_bdev_nvme_create()
00:03:37.496 8bbc7b697 nvmf: Block ctrlr-only admin cmds if NSID is set
00:03:37.509 [Pipeline] }
00:03:37.523 [Pipeline] // stage
00:03:37.531 [Pipeline] stage
00:03:37.534 [Pipeline] { (Prepare)
00:03:37.549 [Pipeline] writeFile
00:03:37.564 [Pipeline] sh
00:03:37.852 + logger -p user.info -t JENKINS-CI
00:03:37.866 [Pipeline] sh
00:03:38.155 + logger -p user.info -t JENKINS-CI
00:03:38.169 [Pipeline] sh
00:03:38.459 + cat autorun-spdk.conf
00:03:38.459 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:38.459 SPDK_TEST_NVMF=1
00:03:38.459 SPDK_TEST_NVME_CLI=1
00:03:38.459 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:38.459 SPDK_TEST_NVMF_NICS=e810
00:03:38.459 SPDK_TEST_VFIOUSER=1
00:03:38.459 SPDK_RUN_UBSAN=1
00:03:38.459 NET_TYPE=phy
00:03:38.468 RUN_NIGHTLY=0
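The autorun-spdk.conf printed above is a flat KEY=VALUE file that is also valid bash, which is exactly how the next step consumes it: the file is sourced so every flag becomes a shell variable. A minimal sketch of that pattern (the echo body is illustrative, not the real test entry point):

    # Source the job config and gate work on its flags.
    conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    [[ -f $conf ]] && source "$conf"
    if [[ ${SPDK_TEST_NVMF:-0} -eq 1 && ${SPDK_TEST_NVMF_TRANSPORT:-} == tcp ]]; then
        echo "would run the NVMe-oF/TCP test suite"   # illustrative
    fi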
00:03:38.472 [Pipeline] readFile
00:03:38.497 [Pipeline] withEnv
00:03:38.499 [Pipeline] {
00:03:38.513 [Pipeline] sh
00:03:38.809 + set -ex
00:03:38.809 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:38.809 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:38.809 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:38.809 ++ SPDK_TEST_NVMF=1
00:03:38.809 ++ SPDK_TEST_NVME_CLI=1
00:03:38.809 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:38.809 ++ SPDK_TEST_NVMF_NICS=e810
00:03:38.809 ++ SPDK_TEST_VFIOUSER=1
00:03:38.809 ++ SPDK_RUN_UBSAN=1
00:03:38.809 ++ NET_TYPE=phy
00:03:38.809 ++ RUN_NIGHTLY=0
00:03:38.809 + case $SPDK_TEST_NVMF_NICS in
00:03:38.809 + DRIVERS=ice
00:03:38.809 + [[ tcp == \r\d\m\a ]]
00:03:38.809 + [[ -n ice ]]
00:03:38.809 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:38.809 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:38.809 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:03:38.809 rmmod: ERROR: Module irdma is not currently loaded
00:03:38.809 rmmod: ERROR: Module i40iw is not currently loaded
00:03:38.809 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:38.809 + true
00:03:38.809 + for D in $DRIVERS
00:03:38.809 + sudo modprobe ice
00:03:38.809 + exit 0
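This block maps the SPDK_TEST_NVMF_NICS value onto a kernel driver, unloads the RDMA-capable drivers that could otherwise claim the NIC, and loads the selected one. The rmmod errors are expected when a module is simply not loaded, which is why the command is allowed to fail. A condensed sketch of the same logic:

    # NIC prep: pick a driver from the conf flag, then reload it cleanly.
    case $SPDK_TEST_NVMF_NICS in
        e810) DRIVERS=ice ;;   # Intel E810 -> ice, as in this run
        # other NIC families would map here (placeholder)
    esac
    # Unloading modules that are not loaded fails harmlessly.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done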
00:03:38.820 [Pipeline] }
00:03:38.833 [Pipeline] // withEnv
00:03:38.837 [Pipeline] }
00:03:38.849 [Pipeline] // stage
00:03:38.856 [Pipeline] catchError
00:03:38.857 [Pipeline] {
00:03:38.870 [Pipeline] timeout
00:03:38.871 Timeout set to expire in 1 hr 0 min
00:03:38.872 [Pipeline] {
00:03:38.883 [Pipeline] stage
00:03:38.885 [Pipeline] { (Tests)
00:03:38.894 [Pipeline] sh
00:03:39.179 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:39.179 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:39.179 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:39.179 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:39.179 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:39.179 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:39.179 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:39.179 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:39.179 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:39.179 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:39.179 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:39.179 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:39.179 + source /etc/os-release
00:03:39.179 ++ NAME='Fedora Linux'
00:03:39.179 ++ VERSION='39 (Cloud Edition)'
00:03:39.179 ++ ID=fedora
00:03:39.179 ++ VERSION_ID=39
00:03:39.179 ++ VERSION_CODENAME=
00:03:39.179 ++ PLATFORM_ID=platform:f39
00:03:39.179 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:39.179 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:39.179 ++ LOGO=fedora-logo-icon
00:03:39.179 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:39.179 ++ HOME_URL=https://fedoraproject.org/
00:03:39.179 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:39.179 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:39.179 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:39.179 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:39.179 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:39.179 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:39.179 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:39.179 ++ SUPPORT_END=2024-11-12
00:03:39.179 ++ VARIANT='Cloud Edition'
00:03:39.179 ++ VARIANT_ID=cloud
00:03:39.179 + uname -a
00:03:39.179 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:39.179 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:42.485 Hugepages
00:03:42.485 node hugesize free / total
00:03:42.485 node0 1048576kB 0 / 0
00:03:42.485 node0 2048kB 0 / 0
00:03:42.485 node1 1048576kB 0 / 0
00:03:42.485 node1 2048kB 0 / 0
00:03:42.485
00:03:42.485 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:42.485 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:03:42.485 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:03:42.485 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:03:42.485 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:03:42.485 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:03:42.485 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:03:42.485 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:03:42.485 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:03:42.485 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:42.485 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:03:42.485 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:03:42.485 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:03:42.485 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:03:42.485 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:03:42.485 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:03:42.485 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:03:42.485 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
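The status output above shows no hugepages reserved yet on either NUMA node, plus the I/OAT and NVMe devices SPDK can claim. The hugepage numbers are standard kernel counters; a small sketch that reads the same per-node "hugesize free / total" data straight from sysfs, assuming the usual kernel layout:

    # Read per-node hugepage counters from the standard sysfs paths.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}   # e.g. 2048kB or 1048576kB
            echo "${node##*/} $size $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
        done
    done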
00:03:42.485 + rm -f /tmp/spdk-ld-path
00:03:42.485 + source autorun-spdk.conf
00:03:42.485 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:42.485 ++ SPDK_TEST_NVMF=1
00:03:42.485 ++ SPDK_TEST_NVME_CLI=1
00:03:42.485 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:42.485 ++ SPDK_TEST_NVMF_NICS=e810
00:03:42.485 ++ SPDK_TEST_VFIOUSER=1
00:03:42.485 ++ SPDK_RUN_UBSAN=1
00:03:42.485 ++ NET_TYPE=phy
00:03:42.485 ++ RUN_NIGHTLY=0
00:03:42.485 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:42.485 + [[ -n '' ]]
00:03:42.485 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:42.485 + for M in /var/spdk/build-*-manifest.txt
00:03:42.485 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:42.485 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:42.485 + for M in /var/spdk/build-*-manifest.txt
00:03:42.485 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:42.485 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:42.485 + for M in /var/spdk/build-*-manifest.txt
00:03:42.485 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:42.485 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:42.485 ++ uname
00:03:42.485 + [[ Linux == \L\i\n\u\x ]]
00:03:42.485 + sudo dmesg -T
00:03:42.485 + sudo dmesg --clear
00:03:42.485 + dmesg_pid=1130976
00:03:42.485 + [[ Fedora Linux == FreeBSD ]]
00:03:42.485 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:42.485 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:42.485 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:42.485 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:42.485 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:03:42.485 + [[ -x /usr/src/fio-static/fio ]]
00:03:42.485 + export FIO_BIN=/usr/src/fio-static/fio
00:03:42.485 + FIO_BIN=/usr/src/fio-static/fio
00:03:42.485 + sudo dmesg -Tw
00:03:42.485 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:42.485 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:42.485 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:42.485 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:42.485 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:42.485 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:42.485 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:42.485 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:42.485 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
07:13:10 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
07:13:10 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
07:13:10 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
07:13:10 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
07:13:10 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
07:13:10 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
07:13:10 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
07:13:10 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
07:13:10 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
07:13:10 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
07:13:10 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
07:13:10 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
07:13:10 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:42.748 07:13:10 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
07:13:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
07:13:10 -- scripts/common.sh@15 -- $ shopt -s extglob
07:13:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
07:13:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:13:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
07:13:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:13:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:13:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:13:10 -- paths/export.sh@5 -- $ export PATH
07:13:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:13:10 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
07:13:10 -- common/autobuild_common.sh@493 -- $ date +%s
07:13:10 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732601590.XXXXXX
07:13:10 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732601590.vd02XX
07:13:10 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
07:13:10 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
07:13:10 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
07:13:10 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
07:13:10 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
07:13:10 -- common/autobuild_common.sh@509 -- $ get_config_params
07:13:10 -- common/autotest_common.sh@409 -- $ xtrace_disable
07:13:10 -- common/autotest_common.sh@10 -- $ set +x
07:13:10 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
07:13:10 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
07:13:10 -- pm/common@17 -- $ local monitor
07:13:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:13:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:13:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:13:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:13:10 -- pm/common@21 -- $ date +%s
07:13:10 -- pm/common@21 -- $ date +%s
07:13:10 -- pm/common@25 -- $ sleep 1
07:13:10 -- pm/common@21 -- $ date +%s
07:13:10 -- pm/common@21 -- $ date +%s
07:13:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601590
07:13:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601590
07:13:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601590
07:13:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732601590
00:03:42.748 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601590_collect-cpu-load.pm.log
00:03:42.748 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601590_collect-vmstat.pm.log
00:03:42.748 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601590_collect-cpu-temp.pm.log
00:03:42.748 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732601590_collect-bmc-pm.bmc.pm.log
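Four resource monitors (CPU load, vmstat, CPU temperature, BMC power) are started here, each redirecting its output to a date-stamped log under output/power; the next line registers an EXIT trap so they are torn down when autobuild finishes. A generic sketch of that start-in-background/stop-on-exit pattern (the stop_monitors helper and variable names are illustrative, not SPDK's own implementation):

    # Start collectors in the background and tear them down on exit.
    declare -a MONITOR_PIDS=()
    for mon in collect-cpu-load collect-vmstat; do   # subset, for brevity
        "$SPDK_DIR/scripts/perf/pm/$mon" -d "$OUT_DIR/power" -l -p "monitor.$(date +%s)" &
        MONITOR_PIDS+=($!)
    done
    stop_monitors() { kill "${MONITOR_PIDS[@]}" 2>/dev/null || true; }
    trap stop_monitors EXIT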
00:03:43.691 07:13:11 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
07:13:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
07:13:11 -- spdk/autobuild.sh@12 -- $ umask 022
07:13:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
07:13:11 -- spdk/autobuild.sh@16 -- $ date -u
00:03:43.691 Tue Nov 26 06:13:11 AM UTC 2024
07:13:11 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:43.691 v25.01-pre-237-g9ebbe7008
07:13:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
07:13:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
07:13:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
07:13:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
07:13:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable
07:13:11 -- common/autotest_common.sh@10 -- $ set +x
00:03:43.691 ************************************
00:03:43.691 START TEST ubsan
00:03:43.691 ************************************
07:13:11 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:43.691 using ubsan
00:03:43.691
00:03:43.691 real 0m0.001s
00:03:43.691 user 0m0.001s
00:03:43.691 sys 0m0.000s
07:13:11 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
07:13:11 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:43.951 ************************************
00:03:43.951 END TEST ubsan
00:03:43.951 ************************************
07:13:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
07:13:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
07:13:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
07:13:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
07:13:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
07:13:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
07:13:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
07:13:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
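The configure invocation that follows is driven by the conf flags sourced earlier: the config_params string above already showed --enable-ubsan (from SPDK_RUN_UBSAN=1) and --with-vfio-user (from SPDK_TEST_VFIOUSER=1) alongside the fixed debug/werror options. A sketch of that flag-to-option mapping, inferred from this log rather than taken from autobuild itself:

    # Translate job flags into ./configure options (mapping inferred).
    cfg=(--enable-debug --enable-werror)
    [[ ${SPDK_RUN_UBSAN:-0} -eq 1 ]] && cfg+=(--enable-ubsan)
    [[ ${SPDK_TEST_VFIOUSER:-0} -eq 1 ]] && cfg+=(--with-vfio-user)
    ./configure "${cfg[@]}"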
07:13:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:03:43.951 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:43.951 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:44.521 Using 'verbs' RDMA provider
00:04:00.375 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:04:12.612 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:04:13.447 Creating mk/config.mk...done.
00:04:13.447 Creating mk/cc.flags.mk...done.
00:04:13.447 Type 'make' to build.
07:13:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
07:13:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
07:13:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
07:13:41 -- common/autotest_common.sh@10 -- $ set +x
00:04:13.447 ************************************
00:04:13.447 START TEST make
00:04:13.447 ************************************
07:13:41 make -- common/autotest_common.sh@1129 -- $ make -j144
00:04:13.709 make[1]: Nothing to be done for 'all'.
00:04:15.102 The Meson build system
00:04:15.102 Version: 1.5.0
00:04:15.102 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:04:15.102 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:15.102 Build type: native build
00:04:15.102 Project name: libvfio-user
00:04:15.102 Project version: 0.0.1
00:04:15.102 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:15.102 C linker for the host machine: cc ld.bfd 2.40-14
00:04:15.102 Host machine cpu family: x86_64
00:04:15.102 Host machine cpu: x86_64
00:04:15.102 Run-time dependency threads found: YES
00:04:15.102 Library dl found: YES
00:04:15.102 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:15.102 Run-time dependency json-c found: YES 0.17
00:04:15.102 Run-time dependency cmocka found: YES 1.1.7
00:04:15.102 Program pytest-3 found: NO
00:04:15.102 Program flake8 found: NO
00:04:15.102 Program misspell-fixer found: NO
00:04:15.102 Program restructuredtext-lint found: NO
00:04:15.102 Program valgrind found: YES (/usr/bin/valgrind)
00:04:15.102 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:15.102 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:15.102 Compiler for C supports arguments -Wwrite-strings: YES
00:04:15.102 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:15.102 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:04:15.102 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:04:15.102 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:15.102 Build targets in project: 8
00:04:15.102 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:15.102 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:15.102
00:04:15.102 libvfio-user 0.0.1
00:04:15.102
00:04:15.102 User defined options
00:04:15.102 buildtype : debug
00:04:15.102 default_library: shared
00:04:15.102 libdir : /usr/local/lib
00:04:15.102
00:04:15.102 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:15.675 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:15.675 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:04:15.675 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:15.675 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:04:15.675 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:15.675 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:04:15.675 [6/37] Compiling C object samples/null.p/null.c.o
00:04:15.675 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:15.675 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:04:15.675 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:15.675 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:15.675 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:04:15.675 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:15.675 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:15.675 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:15.675 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:15.675 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:04:15.675 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:15.675 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:04:15.675 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:04:15.675 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:04:15.675 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:15.675 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:15.675 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:15.675 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:15.675 [25/37] Compiling C object samples/server.p/server.c.o
00:04:15.937 [26/37] Compiling C object samples/client.p/client.c.o
00:04:15.937 [27/37] Linking target samples/client
00:04:15.937 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:04:15.937 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:15.937 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:04:15.937 [31/37] Linking target test/unit_tests
00:04:15.937 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:16.199 [33/37] Linking target samples/null
00:04:16.199 [34/37] Linking target samples/shadow_ioeventfd_server
00:04:16.199 [35/37] Linking target samples/server
00:04:16.199 [36/37] Linking target samples/gpio-pci-idio-16
00:04:16.199 [37/37] Linking target samples/lspci
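libvfio-user is built out of tree with meson and ninja, then staged with a DESTDIR install rather than being written into /usr/local, as the next lines show. An equivalent hand-run flow, with directories shortened (the exact options come from SPDK's build scripts):

    # Out-of-tree meson/ninja build, staged into a scratch root.
    meson setup build-debug libvfio-user --buildtype debug -Ddefault_library=shared
    ninja -C build-debug
    DESTDIR=$PWD/stage meson install --quiet -C build-debug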
00:04:16.199 INFO: autodetecting backend as ninja
00:04:16.199 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:16.199 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:16.459 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:16.459 ninja: no work to do.
00:04:23.114 The Meson build system
00:04:23.114 Version: 1.5.0
00:04:23.114 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:04:23.114 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:04:23.114 Build type: native build
00:04:23.114 Program cat found: YES (/usr/bin/cat)
00:04:23.114 Project name: DPDK
00:04:23.114 Project version: 24.03.0
00:04:23.114 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:23.114 C linker for the host machine: cc ld.bfd 2.40-14
00:04:23.114 Host machine cpu family: x86_64
00:04:23.114 Host machine cpu: x86_64
00:04:23.114 Message: ## Building in Developer Mode ##
00:04:23.114 Program pkg-config found: YES (/usr/bin/pkg-config)
00:04:23.114 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:04:23.114 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:04:23.114 Program python3 found: YES (/usr/bin/python3)
00:04:23.114 Program cat found: YES (/usr/bin/cat)
00:04:23.114 Compiler for C supports arguments -march=native: YES
00:04:23.114 Checking for size of "void *" : 8
00:04:23.114 Checking for size of "void *" : 8 (cached)
00:04:23.114 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:04:23.114 Library m found: YES
00:04:23.114 Library numa found: YES
00:04:23.114 Has header "numaif.h" : YES
00:04:23.114 Library fdt found: NO
00:04:23.114 Library execinfo found: NO
00:04:23.114 Has header "execinfo.h" : YES
00:04:23.114 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:23.114 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:23.114 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:23.114 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:23.114 Run-time dependency openssl found: YES 3.1.1
00:04:23.114 Run-time dependency libpcap found: YES 1.10.4
00:04:23.114 Has header "pcap.h" with dependency libpcap: YES
00:04:23.114 Compiler for C supports arguments -Wcast-qual: YES
00:04:23.114 Compiler for C supports arguments -Wdeprecated: YES
00:04:23.114 Compiler for C supports arguments -Wformat: YES
00:04:23.114 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:23.114 Compiler for C supports arguments -Wformat-security: NO
00:04:23.114 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:23.114 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:23.114 Compiler for C supports arguments -Wnested-externs: YES
00:04:23.114 Compiler for C supports arguments -Wold-style-definition: YES
00:04:23.114 Compiler for C supports arguments -Wpointer-arith: YES
00:04:23.114 Compiler for C supports arguments -Wsign-compare: YES
00:04:23.114 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:23.114 Compiler for C supports arguments -Wundef: YES
00:04:23.114 Compiler for C supports arguments -Wwrite-strings: YES
00:04:23.114 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:23.114 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:23.114 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:23.114 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:23.114 Program objdump found: YES (/usr/bin/objdump)
00:04:23.114 Compiler for C supports arguments -mavx512f: YES
00:04:23.114 Checking if "AVX512 checking" compiles: YES
00:04:23.114 Fetching value of define "__SSE4_2__" : 1
00:04:23.114 Fetching value of define "__AES__" : 1
00:04:23.114 Fetching value of define "__AVX__" : 1
00:04:23.114 Fetching value of define "__AVX2__" : 1
00:04:23.114 Fetching value of define "__AVX512BW__" : 1
00:04:23.114 Fetching value of define "__AVX512CD__" : 1
00:04:23.114 Fetching value of define "__AVX512DQ__" : 1
00:04:23.114 Fetching value of define "__AVX512F__" : 1
00:04:23.114 Fetching value of define "__AVX512VL__" : 1
00:04:23.114 Fetching value of define "__PCLMUL__" : 1
00:04:23.114 Fetching value of define "__RDRND__" : 1
00:04:23.114 Fetching value of define "__RDSEED__" : 1
00:04:23.114 Fetching value of define "__VPCLMULQDQ__" : 1
00:04:23.114 Fetching value of define "__znver1__" : (undefined)
00:04:23.114 Fetching value of define "__znver2__" : (undefined)
00:04:23.114 Fetching value of define "__znver3__" : (undefined)
00:04:23.114 Fetching value of define "__znver4__" : (undefined)
00:04:23.114 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:23.114 Message: lib/log: Defining dependency "log"
00:04:23.114 Message: lib/kvargs: Defining dependency "kvargs"
00:04:23.114 Message: lib/telemetry: Defining dependency "telemetry"
00:04:23.114 Checking for function "getentropy" : NO
00:04:23.114 Message: lib/eal: Defining dependency "eal"
00:04:23.114 Message: lib/ring: Defining dependency "ring"
00:04:23.114 Message: lib/rcu: Defining dependency "rcu"
00:04:23.114 Message: lib/mempool: Defining dependency "mempool"
00:04:23.114 Message: lib/mbuf: Defining dependency "mbuf"
00:04:23.114 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:23.114 Fetching value of define "__AVX512F__" : 1 (cached)
00:04:23.114 Fetching value of define "__AVX512BW__" : 1 (cached)
00:04:23.114 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:04:23.114 Fetching value of define "__AVX512VL__" : 1 (cached)
00:04:23.114 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:04:23.114 Compiler for C supports arguments -mpclmul: YES
00:04:23.114 Compiler for C supports arguments -maes: YES
00:04:23.114 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:23.114 Compiler for C supports arguments -mavx512bw: YES
00:04:23.114 Compiler for C supports arguments -mavx512dq: YES
00:04:23.114 Compiler for C supports arguments -mavx512vl: YES
00:04:23.114 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:23.114 Compiler for C supports arguments -mavx2: YES
00:04:23.114 Compiler for C supports arguments -mavx: YES
00:04:23.114 Message: lib/net: Defining dependency "net"
00:04:23.114 Message: lib/meter: Defining dependency "meter"
00:04:23.114 Message: lib/ethdev: Defining dependency "ethdev"
00:04:23.114 Message: lib/pci: Defining dependency "pci"
00:04:23.114 Message: lib/cmdline: Defining dependency "cmdline"
00:04:23.114 Message: lib/hash: Defining dependency "hash"
00:04:23.114 Message: lib/timer: Defining dependency "timer"
00:04:23.114 Message: lib/compressdev: Defining dependency "compressdev"
00:04:23.114 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:23.114 Message: lib/dmadev: Defining dependency "dmadev"
00:04:23.114 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:23.114 Message: lib/power: Defining dependency "power"
00:04:23.114 Message: lib/reorder: Defining dependency "reorder"
00:04:23.114 Message: lib/security: Defining dependency "security"
00:04:23.114 Has header "linux/userfaultfd.h" : YES
00:04:23.114 Has header "linux/vduse.h" : YES
00:04:23.114 Message: lib/vhost: Defining dependency "vhost"
00:04:23.114 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:23.114 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:23.114 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:23.114 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:23.114 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:23.114 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:23.114 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:23.114 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:23.114 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:23.114 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:23.114 Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:23.114 Configuring doxy-api-html.conf using configuration
00:04:23.114 Configuring doxy-api-man.conf using configuration
00:04:23.114 Program mandb found: YES (/usr/bin/mandb)
00:04:23.114 Program sphinx-build found: NO
00:04:23.114 Configuring rte_build_config.h using configuration
00:04:23.114 Message:
00:04:23.114 =================
00:04:23.114 Applications Enabled
00:04:23.114 =================
00:04:23.114
00:04:23.114 apps:
00:04:23.114
00:04:23.114
00:04:23.114 Message:
00:04:23.114 =================
00:04:23.114 Libraries Enabled
00:04:23.114 =================
00:04:23.114
00:04:23.114 libs:
00:04:23.114 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:23.114 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:23.114 cryptodev, dmadev, power, reorder, security, vhost,
00:04:23.114
00:04:23.114 Message:
00:04:23.114 ===============
00:04:23.114 Drivers Enabled
00:04:23.114 ===============
00:04:23.114
00:04:23.114 common:
00:04:23.114
00:04:23.114 bus:
00:04:23.114 pci, vdev,
00:04:23.114 mempool:
00:04:23.114 ring,
00:04:23.114 dma:
00:04:23.114
00:04:23.114 net:
00:04:23.114
00:04:23.114 crypto:
00:04:23.114
00:04:23.114 compress:
00:04:23.114
00:04:23.114 vdpa:
00:04:23.114
00:04:23.114
00:04:23.115 Message:
00:04:23.115 =================
00:04:23.115 Content Skipped
00:04:23.115 =================
00:04:23.115
00:04:23.115 apps:
00:04:23.115 dumpcap: explicitly disabled via build config
00:04:23.115 graph: explicitly disabled via build config
00:04:23.115 pdump: explicitly disabled via build config
00:04:23.115 proc-info: explicitly disabled via build config
00:04:23.115 test-acl: explicitly disabled via build config
00:04:23.115 test-bbdev: explicitly disabled via build config
00:04:23.115 test-cmdline: explicitly disabled via build config
00:04:23.115 test-compress-perf: explicitly disabled via build config
00:04:23.115 test-crypto-perf: explicitly disabled via build config
00:04:23.115 test-dma-perf: explicitly disabled via build config
00:04:23.115 test-eventdev: explicitly disabled via build config
00:04:23.115 test-fib: explicitly disabled via build config
00:04:23.115 test-flow-perf: explicitly disabled via build config
00:04:23.115 test-gpudev: explicitly disabled via build config
00:04:23.115 test-mldev: explicitly disabled via build config
00:04:23.115 test-pipeline: explicitly disabled via build config
00:04:23.115 test-pmd: explicitly disabled via build config
00:04:23.115 test-regex: explicitly disabled via build config
00:04:23.115 test-sad: explicitly disabled via build config
00:04:23.115 test-security-perf: explicitly disabled via build config
00:04:23.115
00:04:23.115 libs:
00:04:23.115 argparse: explicitly disabled via build config
00:04:23.115 metrics: explicitly disabled via build config
00:04:23.115 acl: explicitly disabled via build config
00:04:23.115 bbdev: explicitly disabled via build config
00:04:23.115 bitratestats: explicitly disabled via build config
00:04:23.115 bpf: explicitly disabled via build config
00:04:23.115 cfgfile: explicitly disabled via build config
00:04:23.115 distributor: explicitly disabled via build config
00:04:23.115 efd: explicitly disabled via build config
00:04:23.115 eventdev: explicitly disabled via build config
00:04:23.115 dispatcher: explicitly disabled via build config
00:04:23.115 gpudev: explicitly disabled via build config
00:04:23.115 gro: explicitly disabled via build config
00:04:23.115 gso: explicitly disabled via build config
00:04:23.115 ip_frag: explicitly disabled via build config
00:04:23.115 jobstats: explicitly disabled via build config
00:04:23.115 latencystats: explicitly disabled via build config
00:04:23.115 lpm: explicitly disabled via build config
00:04:23.115 member: explicitly disabled via build config
00:04:23.115 pcapng: explicitly disabled via build config
00:04:23.115 rawdev: explicitly disabled via build config
00:04:23.115 regexdev: explicitly disabled via build config
00:04:23.115 mldev: explicitly disabled via build config
00:04:23.115 rib: explicitly disabled via build config
00:04:23.115 sched: explicitly disabled via build config
00:04:23.115 stack: explicitly disabled via build config
00:04:23.115 ipsec: explicitly disabled via build config
00:04:23.115 pdcp: explicitly disabled via build config
00:04:23.115 fib: explicitly disabled via build config
00:04:23.115 port: explicitly disabled via build config
00:04:23.115 pdump: explicitly disabled via build config
00:04:23.115 table: explicitly disabled via build config
00:04:23.115 pipeline: explicitly disabled via build config
00:04:23.115 graph: explicitly disabled via build config
00:04:23.115 node: explicitly disabled via build config
00:04:23.115
00:04:23.115 drivers:
00:04:23.115 common/cpt: not in enabled drivers build config
00:04:23.115 common/dpaax: not in enabled drivers build config
00:04:23.115 common/iavf: not in enabled drivers build config
00:04:23.115 common/idpf: not in enabled drivers build config
00:04:23.115 common/ionic: not in enabled drivers build config
00:04:23.115 common/mvep: not in enabled drivers build config
00:04:23.115 common/octeontx: not in enabled drivers build config
00:04:23.115 bus/auxiliary: not in enabled drivers build config
00:04:23.115 bus/cdx: not in enabled drivers build config
00:04:23.115 bus/dpaa: not in enabled drivers build config
00:04:23.115 bus/fslmc: not in enabled drivers build config
00:04:23.115 bus/ifpga: not in enabled drivers build config
00:04:23.115 bus/platform: not in enabled drivers build config
00:04:23.115 bus/uacce: not in enabled drivers build config
00:04:23.115 bus/vmbus: not in enabled drivers build config
00:04:23.115 common/cnxk: not in enabled drivers build config
00:04:23.115 common/mlx5: not in enabled drivers build config
00:04:23.115 common/nfp: not in enabled drivers build config
00:04:23.115 common/nitrox: not in enabled drivers build config
00:04:23.115 common/qat: not in enabled drivers build config
00:04:23.115 common/sfc_efx: not in enabled drivers build config
00:04:23.115 mempool/bucket: not in enabled drivers build config
00:04:23.115 mempool/cnxk: not in enabled drivers build config
00:04:23.115 mempool/dpaa: not in enabled drivers build config
00:04:23.115 mempool/dpaa2: not in enabled drivers build config
00:04:23.115 mempool/octeontx: not in enabled drivers build config
00:04:23.115 mempool/stack: not in enabled drivers build config
00:04:23.115 dma/cnxk: not in enabled drivers build config
00:04:23.115 dma/dpaa: not in enabled drivers build config
00:04:23.115 dma/dpaa2: not in enabled drivers build config
00:04:23.115 dma/hisilicon: not in enabled drivers build config
00:04:23.115 dma/idxd: not in enabled drivers build config
00:04:23.115 dma/ioat: not in enabled drivers build config
00:04:23.115 dma/skeleton: not in enabled drivers build config
00:04:23.115 net/af_packet: not in enabled drivers build config
00:04:23.115 net/af_xdp: not in enabled drivers build config
00:04:23.115 net/ark: not in enabled drivers build config
00:04:23.115 net/atlantic: not in enabled drivers build config
00:04:23.115 net/avp: not in enabled drivers build config
00:04:23.115 net/axgbe: not in enabled drivers build config
00:04:23.115 net/bnx2x: not in enabled drivers build config
00:04:23.115 net/bnxt: not in enabled drivers build config
00:04:23.115 net/bonding: not in enabled drivers build config
00:04:23.115 net/cnxk: not in enabled drivers build config
00:04:23.115 net/cpfl: not in enabled drivers build config
00:04:23.115 net/cxgbe: not in enabled drivers build config
00:04:23.115 net/dpaa: not in enabled drivers build config
00:04:23.115 net/dpaa2: not in enabled drivers build config
00:04:23.115 net/e1000: not in enabled drivers build config
00:04:23.115 net/ena: not in enabled drivers build config
00:04:23.115 net/enetc: not in enabled drivers build config
00:04:23.115 net/enetfec: not in enabled drivers build config
00:04:23.115 net/enic: not in enabled drivers build config
00:04:23.115 net/failsafe: not in enabled drivers build config
00:04:23.115 net/fm10k: not in enabled drivers build config
00:04:23.115 net/gve: not in enabled drivers build config
00:04:23.115 net/hinic: not in enabled drivers build config
00:04:23.115 net/hns3: not in enabled drivers build config
00:04:23.115 net/i40e: not in enabled drivers build config
00:04:23.115 net/iavf: not in enabled drivers build config
00:04:23.115 net/ice: not in enabled drivers build config
00:04:23.115 net/idpf: not in enabled drivers build config
00:04:23.115 net/igc: not in enabled drivers build config
00:04:23.115 net/ionic: not in enabled drivers build config
00:04:23.115 net/ipn3ke: not in enabled drivers build config
00:04:23.115 net/ixgbe: not in enabled drivers build config
00:04:23.115 net/mana: not in enabled drivers build config
00:04:23.115 net/memif: not in enabled drivers build config
00:04:23.115 net/mlx4: not in enabled drivers build config
00:04:23.115 net/mlx5: not in enabled drivers build config
00:04:23.115 net/mvneta: not in enabled drivers build config
00:04:23.115 net/mvpp2: not in enabled drivers build config
00:04:23.115 net/netvsc: not in enabled drivers build config
00:04:23.115 net/nfb: not in enabled drivers build config
00:04:23.115 net/nfp: not in enabled drivers build config
00:04:23.115 net/ngbe: not in enabled drivers build config
00:04:23.115 net/null: not in enabled drivers build config
00:04:23.115 net/octeontx: not in enabled drivers build config
00:04:23.115 net/octeon_ep: not in enabled drivers build config
00:04:23.115 net/pcap: not in enabled drivers build config
00:04:23.115 net/pfe: not in enabled drivers build config
00:04:23.115 net/qede: not in enabled drivers build config
00:04:23.115 net/ring: not in enabled drivers build config
00:04:23.115 net/sfc: not in enabled drivers build config
00:04:23.115 net/softnic: not in enabled drivers build config
00:04:23.115 net/tap: not in enabled drivers build config
00:04:23.115 net/thunderx: not in enabled drivers build config
00:04:23.115 net/txgbe: not in enabled drivers build config
00:04:23.115 net/vdev_netvsc: not in enabled drivers build config
00:04:23.115 net/vhost: not in enabled drivers build config
00:04:23.115 net/virtio: not in enabled drivers build config
00:04:23.115 net/vmxnet3: not in enabled drivers build config
00:04:23.115 raw/*: missing internal dependency, "rawdev"
00:04:23.115 crypto/armv8: not in enabled drivers build config
00:04:23.115 crypto/bcmfs: not in enabled drivers build config
00:04:23.115 crypto/caam_jr: not in enabled drivers build config
00:04:23.115 crypto/ccp: not in enabled drivers build config
00:04:23.115 crypto/cnxk: not in enabled drivers build config
00:04:23.115 crypto/dpaa_sec: not in enabled drivers build config
00:04:23.115 crypto/dpaa2_sec: not in enabled drivers build config
00:04:23.115 crypto/ipsec_mb: not in enabled drivers build config
00:04:23.115 crypto/mlx5: not in enabled drivers build config
00:04:23.115 crypto/mvsam: not in enabled drivers build config
00:04:23.115 crypto/nitrox: not in enabled drivers build config
00:04:23.115 crypto/null: not in enabled drivers build config
00:04:23.115 crypto/octeontx: not in enabled drivers build config
00:04:23.115 crypto/openssl: not in enabled drivers build config
00:04:23.115 crypto/scheduler: not in enabled drivers build config
00:04:23.115 crypto/uadk: not in enabled drivers build config
00:04:23.115 crypto/virtio: not in enabled drivers build config
00:04:23.115 compress/isal: not in enabled drivers build config
00:04:23.115 compress/mlx5: not in enabled drivers build config
00:04:23.115 compress/nitrox: not in enabled drivers build config
00:04:23.115 compress/octeontx: not in enabled drivers build config
00:04:23.115 compress/zlib: not in enabled drivers build config
00:04:23.115 regex/*: missing internal dependency, "regexdev"
00:04:23.115 ml/*: missing internal dependency, "mldev"
00:04:23.115 vdpa/ifc: not in enabled drivers build config
00:04:23.115 vdpa/mlx5: not in enabled drivers build config
00:04:23.115 vdpa/nfp: not in enabled drivers build config
00:04:23.115 vdpa/sfc: not in enabled drivers build config
00:04:23.115 event/*: missing internal dependency, "eventdev"
00:04:23.115 baseband/*: missing internal dependency, "bbdev"
00:04:23.115 gpu/*: missing internal dependency, "gpudev"
00:04:23.115
00:04:23.115
00:04:23.116 Build targets in project: 84
00:04:23.116
00:04:23.116 DPDK 24.03.0
00:04:23.116
00:04:23.116 User defined options
00:04:23.116 buildtype : debug
00:04:23.116 default_library : shared
00:04:23.116 libdir : lib
00:04:23.116 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:04:23.116 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:04:23.116 c_link_args :
00:04:23.116 cpu_instruction_set: native
00:04:23.116 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:04:23.116 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:04:23.116 enable_docs : false
00:04:23.116 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:04:23.116 enable_kmods : false
00:04:23.116 max_lcores : 128
00:04:23.116 tests : false
00:04:23.116
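The "User defined options" block records the meson options SPDK passed when configuring its bundled DPDK: a debug shared-library build with most apps, libraries, and drivers compiled out. A sketch of an equivalent hand-run configuration (option names are DPDK's meson options; the comma lists below are truncated from the full ones above):

    # Roughly equivalent manual DPDK configuration (lists truncated).
    meson setup build-tmp \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Ddisable_apps=test-dma-perf,test,test-sad \
        -Ddisable_libs=port,lpm,ipsec \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Dmax_lcores=128 \
        -Dtests=false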
00:04:23.115 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:23.116 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:04:23.116 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[8/267] Linking static target lib/librte_kvargs.a
[9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[10/267] Compiling C object lib/librte_log.a.p/log_log.c.o
[11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:04:23.376 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[18/267] Linking static target lib/librte_log.a
[19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
[24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
[30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[31/267] Linking static target lib/librte_pci.a
[32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
[33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
[35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
[36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
[37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
[38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:04:23.636 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
[41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[43/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
[46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
[47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
[54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
[56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
[57/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
[58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[61/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[62/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
[63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
[65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[67/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:23.637 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:23.637 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:23.637 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:23.637 [73/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:23.637 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:23.637 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:23.637 [76/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:23.637 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:23.637 [78/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:23.637 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:23.637 [80/267] Linking static target lib/librte_meter.a 00:04:23.637 [81/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:23.637 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:23.637 [83/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:23.637 [84/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:23.898 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:23.898 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:23.898 [87/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:04:23.898 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:23.898 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:23.898 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:23.898 [91/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:23.898 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:23.898 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:23.898 [94/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:23.898 [95/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:23.898 [96/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:23.898 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:23.898 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:23.898 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:23.898 [100/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:23.898 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:23.898 [102/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:23.898 [103/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:23.898 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:23.898 [105/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:23.898 [106/267] Linking static target lib/librte_cmdline.a 00:04:23.898 [107/267] Linking static target lib/librte_telemetry.a 00:04:23.898 [108/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:23.898 [109/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:23.898 
[110/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:23.898 [111/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:23.898 [112/267] Linking static target lib/librte_ring.a 00:04:23.898 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:23.898 [114/267] Linking static target lib/librte_timer.a 00:04:23.898 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:23.898 [116/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:23.898 [117/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:23.898 [118/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:23.898 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:23.898 [120/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:23.898 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:23.898 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:23.898 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:23.898 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:23.898 [125/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:23.898 [126/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:23.898 [127/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:23.898 [128/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:23.898 [129/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:23.898 [130/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:23.898 [131/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:23.898 [132/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.898 [133/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:23.898 [134/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:23.898 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:23.898 [136/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:23.898 [137/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:23.898 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:23.898 [139/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:23.898 [140/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:23.898 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:23.898 [142/267] Linking static target lib/librte_net.a 00:04:23.898 [143/267] Linking static target lib/librte_rcu.a 00:04:23.898 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:23.898 [145/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:23.898 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:23.898 [147/267] Linking static target lib/librte_compressdev.a 00:04:23.898 [148/267] Linking static target lib/librte_mempool.a 00:04:23.898 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:23.898 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:23.898 [151/267] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:23.898 [152/267] Linking static target lib/librte_dmadev.a 00:04:23.899 [153/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:23.899 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:23.899 [155/267] Linking target lib/librte_log.so.24.1 00:04:23.899 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:23.899 [157/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:23.899 [158/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:23.899 [159/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:23.899 [160/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:23.899 [161/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:23.899 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:23.899 [163/267] Linking static target lib/librte_reorder.a 00:04:23.899 [164/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:23.899 [165/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:23.899 [166/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:23.899 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:23.899 [168/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:23.899 [169/267] Linking static target lib/librte_power.a 00:04:23.899 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:23.899 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:23.899 [172/267] Linking static target lib/librte_eal.a 00:04:23.899 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:23.899 [174/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:23.899 [175/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:23.899 [176/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:24.160 [177/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:24.160 [178/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:24.160 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:24.160 [180/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:24.160 [181/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:24.160 [182/267] Linking static target lib/librte_security.a 00:04:24.160 [183/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:24.160 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:24.160 [185/267] Linking target lib/librte_kvargs.so.24.1 00:04:24.160 [186/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:24.160 [187/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:24.160 [188/267] Linking static target drivers/librte_bus_vdev.a 00:04:24.160 [189/267] Linking static target lib/librte_mbuf.a 00:04:24.160 [190/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:24.160 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:24.160 [192/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.160 [193/267] 
Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:24.160 [194/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:24.160 [195/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:24.160 [196/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:24.160 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:24.160 [198/267] Linking static target drivers/librte_bus_pci.a 00:04:24.160 [199/267] Linking static target lib/librte_hash.a 00:04:24.160 [200/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:24.160 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.160 [202/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:24.420 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:24.420 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:24.420 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.420 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.420 [207/267] Linking static target drivers/librte_mempool_ring.a 00:04:24.420 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:24.420 [209/267] Linking static target lib/librte_cryptodev.a 00:04:24.420 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.421 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.421 [212/267] Linking target lib/librte_telemetry.so.24.1 00:04:24.421 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.682 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:24.682 [215/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.682 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.682 [217/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:24.682 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.682 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:24.682 [220/267] Linking static target lib/librte_ethdev.a 00:04:24.943 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.943 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.943 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.204 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.204 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.204 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:26.148 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:26.148 [228/267] Linking static target lib/librte_vhost.a 00:04:26.723 [229/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:28.109 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.780 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.722 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.722 [233/267] Linking target lib/librte_eal.so.24.1 00:04:35.722 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:35.722 [235/267] Linking target lib/librte_meter.so.24.1 00:04:35.722 [236/267] Linking target lib/librte_ring.so.24.1 00:04:35.722 [237/267] Linking target lib/librte_pci.so.24.1 00:04:35.722 [238/267] Linking target lib/librte_timer.so.24.1 00:04:35.722 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:04:35.722 [240/267] Linking target lib/librte_dmadev.so.24.1 00:04:35.981 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:35.981 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:35.981 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:35.981 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:35.981 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:35.981 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:04:35.981 [247/267] Linking target lib/librte_rcu.so.24.1 00:04:35.981 [248/267] Linking target lib/librte_mempool.so.24.1 00:04:36.242 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:36.242 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:36.242 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:04:36.242 [252/267] Linking target lib/librte_mbuf.so.24.1 00:04:36.242 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:36.503 [254/267] Linking target lib/librte_net.so.24.1 00:04:36.503 [255/267] Linking target lib/librte_compressdev.so.24.1 00:04:36.503 [256/267] Linking target lib/librte_reorder.so.24.1 00:04:36.503 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:04:36.503 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:36.503 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:36.503 [260/267] Linking target lib/librte_cmdline.so.24.1 00:04:36.503 [261/267] Linking target lib/librte_hash.so.24.1 00:04:36.503 [262/267] Linking target lib/librte_security.so.24.1 00:04:36.503 [263/267] Linking target lib/librte_ethdev.so.24.1 00:04:36.764 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:36.764 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:36.764 [266/267] Linking target lib/librte_power.so.24.1 00:04:36.764 [267/267] Linking target lib/librte_vhost.so.24.1 00:04:36.764 INFO: autodetecting backend as ninja 00:04:36.764 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:04:40.063 CC lib/log/log.o 00:04:40.063 CC lib/log/log_flags.o 00:04:40.063 CC lib/ut_mock/mock.o 00:04:40.063 CC lib/log/log_deprecated.o 00:04:40.063 CC lib/ut/ut.o 00:04:40.324 LIB libspdk_log.a 
00:04:40.324 LIB libspdk_ut.a 00:04:40.324 LIB libspdk_ut_mock.a 00:04:40.324 SO libspdk_log.so.7.1 00:04:40.324 SO libspdk_ut_mock.so.6.0 00:04:40.324 SO libspdk_ut.so.2.0 00:04:40.324 SYMLINK libspdk_log.so 00:04:40.324 SYMLINK libspdk_ut_mock.so 00:04:40.585 SYMLINK libspdk_ut.so 00:04:40.846 CC lib/util/base64.o 00:04:40.846 CC lib/dma/dma.o 00:04:40.846 CC lib/util/bit_array.o 00:04:40.846 CC lib/util/cpuset.o 00:04:40.846 CC lib/util/crc16.o 00:04:40.846 CC lib/util/crc32.o 00:04:40.846 CC lib/util/crc32c.o 00:04:40.846 CC lib/util/crc32_ieee.o 00:04:40.846 CC lib/util/crc64.o 00:04:40.846 CC lib/util/dif.o 00:04:40.846 CC lib/ioat/ioat.o 00:04:40.846 CXX lib/trace_parser/trace.o 00:04:40.846 CC lib/util/fd.o 00:04:40.846 CC lib/util/fd_group.o 00:04:40.846 CC lib/util/file.o 00:04:40.846 CC lib/util/hexlify.o 00:04:40.846 CC lib/util/iov.o 00:04:40.846 CC lib/util/math.o 00:04:40.846 CC lib/util/net.o 00:04:40.846 CC lib/util/pipe.o 00:04:40.846 CC lib/util/strerror_tls.o 00:04:40.846 CC lib/util/string.o 00:04:40.846 CC lib/util/uuid.o 00:04:40.846 CC lib/util/xor.o 00:04:40.846 CC lib/util/zipf.o 00:04:40.846 CC lib/util/md5.o 00:04:41.107 CC lib/vfio_user/host/vfio_user_pci.o 00:04:41.107 CC lib/vfio_user/host/vfio_user.o 00:04:41.107 LIB libspdk_dma.a 00:04:41.107 SO libspdk_dma.so.5.0 00:04:41.107 LIB libspdk_ioat.a 00:04:41.107 SO libspdk_ioat.so.7.0 00:04:41.107 SYMLINK libspdk_dma.so 00:04:41.107 SYMLINK libspdk_ioat.so 00:04:41.372 LIB libspdk_vfio_user.a 00:04:41.372 SO libspdk_vfio_user.so.5.0 00:04:41.372 LIB libspdk_util.a 00:04:41.372 SYMLINK libspdk_vfio_user.so 00:04:41.372 SO libspdk_util.so.10.1 00:04:41.635 SYMLINK libspdk_util.so 00:04:41.635 LIB libspdk_trace_parser.a 00:04:41.635 SO libspdk_trace_parser.so.6.0 00:04:41.897 SYMLINK libspdk_trace_parser.so 00:04:41.897 CC lib/conf/conf.o 00:04:41.897 CC lib/idxd/idxd.o 00:04:41.897 CC lib/json/json_parse.o 00:04:41.897 CC lib/idxd/idxd_user.o 00:04:41.897 CC lib/json/json_util.o 00:04:41.897 CC lib/idxd/idxd_kernel.o 00:04:41.897 CC lib/json/json_write.o 00:04:41.897 CC lib/env_dpdk/env.o 00:04:41.897 CC lib/env_dpdk/memory.o 00:04:41.897 CC lib/env_dpdk/pci.o 00:04:41.897 CC lib/env_dpdk/init.o 00:04:41.897 CC lib/env_dpdk/threads.o 00:04:41.897 CC lib/rdma_utils/rdma_utils.o 00:04:41.897 CC lib/env_dpdk/pci_ioat.o 00:04:41.897 CC lib/vmd/vmd.o 00:04:41.897 CC lib/env_dpdk/pci_virtio.o 00:04:41.897 CC lib/vmd/led.o 00:04:41.897 CC lib/env_dpdk/pci_vmd.o 00:04:41.897 CC lib/env_dpdk/pci_idxd.o 00:04:41.897 CC lib/env_dpdk/pci_event.o 00:04:41.897 CC lib/env_dpdk/sigbus_handler.o 00:04:41.897 CC lib/env_dpdk/pci_dpdk.o 00:04:41.897 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:41.897 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:42.158 LIB libspdk_conf.a 00:04:42.158 SO libspdk_conf.so.6.0 00:04:42.158 LIB libspdk_rdma_utils.a 00:04:42.158 LIB libspdk_json.a 00:04:42.419 SO libspdk_rdma_utils.so.1.0 00:04:42.419 SO libspdk_json.so.6.0 00:04:42.419 SYMLINK libspdk_conf.so 00:04:42.419 SYMLINK libspdk_rdma_utils.so 00:04:42.419 SYMLINK libspdk_json.so 00:04:42.419 LIB libspdk_idxd.a 00:04:42.419 SO libspdk_idxd.so.12.1 00:04:42.419 LIB libspdk_vmd.a 00:04:42.679 SO libspdk_vmd.so.6.0 00:04:42.679 SYMLINK libspdk_idxd.so 00:04:42.679 SYMLINK libspdk_vmd.so 00:04:42.679 CC lib/rdma_provider/common.o 00:04:42.679 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:42.679 CC lib/jsonrpc/jsonrpc_server.o 00:04:42.679 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:42.679 CC lib/jsonrpc/jsonrpc_client.o 00:04:42.679 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:04:42.941 LIB libspdk_rdma_provider.a 00:04:42.941 SO libspdk_rdma_provider.so.7.0 00:04:42.941 LIB libspdk_jsonrpc.a 00:04:42.941 SO libspdk_jsonrpc.so.6.0 00:04:43.202 SYMLINK libspdk_rdma_provider.so 00:04:43.202 SYMLINK libspdk_jsonrpc.so 00:04:43.202 LIB libspdk_env_dpdk.a 00:04:43.202 SO libspdk_env_dpdk.so.15.1 00:04:43.464 SYMLINK libspdk_env_dpdk.so 00:04:43.464 CC lib/rpc/rpc.o 00:04:43.726 LIB libspdk_rpc.a 00:04:43.726 SO libspdk_rpc.so.6.0 00:04:43.726 SYMLINK libspdk_rpc.so 00:04:44.299 CC lib/notify/notify.o 00:04:44.299 CC lib/notify/notify_rpc.o 00:04:44.299 CC lib/keyring/keyring.o 00:04:44.299 CC lib/keyring/keyring_rpc.o 00:04:44.299 CC lib/trace/trace.o 00:04:44.299 CC lib/trace/trace_flags.o 00:04:44.299 CC lib/trace/trace_rpc.o 00:04:44.299 LIB libspdk_notify.a 00:04:44.299 SO libspdk_notify.so.6.0 00:04:44.299 LIB libspdk_keyring.a 00:04:44.299 LIB libspdk_trace.a 00:04:44.560 SO libspdk_keyring.so.2.0 00:04:44.560 SYMLINK libspdk_notify.so 00:04:44.560 SO libspdk_trace.so.11.0 00:04:44.560 SYMLINK libspdk_keyring.so 00:04:44.560 SYMLINK libspdk_trace.so 00:04:44.821 CC lib/sock/sock.o 00:04:44.821 CC lib/sock/sock_rpc.o 00:04:44.821 CC lib/thread/thread.o 00:04:44.821 CC lib/thread/iobuf.o 00:04:45.393 LIB libspdk_sock.a 00:04:45.393 SO libspdk_sock.so.10.0 00:04:45.393 SYMLINK libspdk_sock.so 00:04:45.654 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:45.654 CC lib/nvme/nvme_ctrlr.o 00:04:45.654 CC lib/nvme/nvme_fabric.o 00:04:45.654 CC lib/nvme/nvme_ns_cmd.o 00:04:45.654 CC lib/nvme/nvme_ns.o 00:04:45.654 CC lib/nvme/nvme_pcie_common.o 00:04:45.654 CC lib/nvme/nvme_pcie.o 00:04:45.654 CC lib/nvme/nvme_qpair.o 00:04:45.654 CC lib/nvme/nvme.o 00:04:45.654 CC lib/nvme/nvme_quirks.o 00:04:45.654 CC lib/nvme/nvme_transport.o 00:04:45.654 CC lib/nvme/nvme_discovery.o 00:04:45.654 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:45.654 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:45.654 CC lib/nvme/nvme_tcp.o 00:04:45.654 CC lib/nvme/nvme_opal.o 00:04:45.654 CC lib/nvme/nvme_io_msg.o 00:04:45.654 CC lib/nvme/nvme_poll_group.o 00:04:45.654 CC lib/nvme/nvme_zns.o 00:04:45.654 CC lib/nvme/nvme_stubs.o 00:04:45.654 CC lib/nvme/nvme_auth.o 00:04:45.654 CC lib/nvme/nvme_cuse.o 00:04:45.654 CC lib/nvme/nvme_vfio_user.o 00:04:45.654 CC lib/nvme/nvme_rdma.o 00:04:46.224 LIB libspdk_thread.a 00:04:46.224 SO libspdk_thread.so.11.0 00:04:46.224 SYMLINK libspdk_thread.so 00:04:46.485 CC lib/blob/blobstore.o 00:04:46.485 CC lib/blob/request.o 00:04:46.485 CC lib/accel/accel.o 00:04:46.485 CC lib/virtio/virtio.o 00:04:46.485 CC lib/blob/zeroes.o 00:04:46.485 CC lib/accel/accel_rpc.o 00:04:46.747 CC lib/accel/accel_sw.o 00:04:46.747 CC lib/virtio/virtio_vhost_user.o 00:04:46.747 CC lib/blob/blob_bs_dev.o 00:04:46.747 CC lib/virtio/virtio_vfio_user.o 00:04:46.747 CC lib/virtio/virtio_pci.o 00:04:46.747 CC lib/init/json_config.o 00:04:46.747 CC lib/init/subsystem.o 00:04:46.747 CC lib/fsdev/fsdev.o 00:04:46.747 CC lib/init/subsystem_rpc.o 00:04:46.747 CC lib/fsdev/fsdev_io.o 00:04:46.747 CC lib/init/rpc.o 00:04:46.747 CC lib/fsdev/fsdev_rpc.o 00:04:46.747 CC lib/vfu_tgt/tgt_endpoint.o 00:04:46.747 CC lib/vfu_tgt/tgt_rpc.o 00:04:47.008 LIB libspdk_init.a 00:04:47.008 SO libspdk_init.so.6.0 00:04:47.008 LIB libspdk_vfu_tgt.a 00:04:47.008 LIB libspdk_virtio.a 00:04:47.008 SO libspdk_vfu_tgt.so.3.0 00:04:47.008 SO libspdk_virtio.so.7.0 00:04:47.008 SYMLINK libspdk_init.so 00:04:47.008 SYMLINK libspdk_vfu_tgt.so 00:04:47.008 SYMLINK libspdk_virtio.so 00:04:47.269 LIB libspdk_fsdev.a 
00:04:47.269 SO libspdk_fsdev.so.2.0 00:04:47.269 SYMLINK libspdk_fsdev.so 00:04:47.269 CC lib/event/app.o 00:04:47.269 CC lib/event/reactor.o 00:04:47.269 CC lib/event/log_rpc.o 00:04:47.269 CC lib/event/app_rpc.o 00:04:47.269 CC lib/event/scheduler_static.o 00:04:47.531 LIB libspdk_accel.a 00:04:47.531 SO libspdk_accel.so.16.0 00:04:47.793 LIB libspdk_nvme.a 00:04:47.793 SYMLINK libspdk_accel.so 00:04:47.793 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:47.793 LIB libspdk_event.a 00:04:47.793 SO libspdk_nvme.so.15.0 00:04:47.793 SO libspdk_event.so.14.0 00:04:48.055 SYMLINK libspdk_event.so 00:04:48.055 SYMLINK libspdk_nvme.so 00:04:48.055 CC lib/bdev/bdev.o 00:04:48.055 CC lib/bdev/bdev_rpc.o 00:04:48.055 CC lib/bdev/bdev_zone.o 00:04:48.055 CC lib/bdev/part.o 00:04:48.055 CC lib/bdev/scsi_nvme.o 00:04:48.316 LIB libspdk_fuse_dispatcher.a 00:04:48.316 SO libspdk_fuse_dispatcher.so.1.0 00:04:48.578 SYMLINK libspdk_fuse_dispatcher.so 00:04:49.522 LIB libspdk_blob.a 00:04:49.522 SO libspdk_blob.so.12.0 00:04:49.522 SYMLINK libspdk_blob.so 00:04:49.785 CC lib/lvol/lvol.o 00:04:49.785 CC lib/blobfs/blobfs.o 00:04:49.785 CC lib/blobfs/tree.o 00:04:50.358 LIB libspdk_bdev.a 00:04:50.358 SO libspdk_bdev.so.17.0 00:04:50.620 SYMLINK libspdk_bdev.so 00:04:50.620 LIB libspdk_blobfs.a 00:04:50.620 SO libspdk_blobfs.so.11.0 00:04:50.620 LIB libspdk_lvol.a 00:04:50.620 SYMLINK libspdk_blobfs.so 00:04:50.620 SO libspdk_lvol.so.11.0 00:04:50.881 SYMLINK libspdk_lvol.so 00:04:50.881 CC lib/nvmf/ctrlr.o 00:04:50.881 CC lib/nvmf/ctrlr_discovery.o 00:04:50.881 CC lib/nvmf/ctrlr_bdev.o 00:04:50.881 CC lib/nvmf/subsystem.o 00:04:50.881 CC lib/ftl/ftl_core.o 00:04:50.881 CC lib/nvmf/nvmf.o 00:04:50.881 CC lib/nvmf/nvmf_rpc.o 00:04:50.881 CC lib/ftl/ftl_init.o 00:04:50.881 CC lib/nvmf/transport.o 00:04:50.881 CC lib/ftl/ftl_layout.o 00:04:50.881 CC lib/nvmf/tcp.o 00:04:50.881 CC lib/nvmf/stubs.o 00:04:50.881 CC lib/ftl/ftl_debug.o 00:04:50.881 CC lib/nvmf/mdns_server.o 00:04:50.881 CC lib/ftl/ftl_io.o 00:04:50.881 CC lib/nvmf/vfio_user.o 00:04:50.881 CC lib/ftl/ftl_sb.o 00:04:50.881 CC lib/scsi/dev.o 00:04:50.882 CC lib/nvmf/rdma.o 00:04:50.882 CC lib/ftl/ftl_l2p.o 00:04:50.882 CC lib/nbd/nbd.o 00:04:50.882 CC lib/scsi/lun.o 00:04:50.882 CC lib/ftl/ftl_l2p_flat.o 00:04:50.882 CC lib/nvmf/auth.o 00:04:50.882 CC lib/ftl/ftl_nv_cache.o 00:04:50.882 CC lib/nbd/nbd_rpc.o 00:04:50.882 CC lib/scsi/port.o 00:04:50.882 CC lib/ftl/ftl_band.o 00:04:50.882 CC lib/ftl/ftl_band_ops.o 00:04:50.882 CC lib/scsi/scsi.o 00:04:50.882 CC lib/ublk/ublk.o 00:04:50.882 CC lib/ublk/ublk_rpc.o 00:04:50.882 CC lib/scsi/scsi_bdev.o 00:04:50.882 CC lib/ftl/ftl_writer.o 00:04:50.882 CC lib/scsi/scsi_pr.o 00:04:50.882 CC lib/ftl/ftl_rq.o 00:04:50.882 CC lib/ftl/ftl_reloc.o 00:04:50.882 CC lib/scsi/scsi_rpc.o 00:04:50.882 CC lib/scsi/task.o 00:04:50.882 CC lib/ftl/ftl_l2p_cache.o 00:04:50.882 CC lib/ftl/ftl_p2l.o 00:04:50.882 CC lib/ftl/ftl_p2l_log.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:50.882 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:04:50.882 CC lib/ftl/utils/ftl_conf.o 00:04:50.882 CC lib/ftl/utils/ftl_md.o 00:04:50.882 CC lib/ftl/utils/ftl_mempool.o 00:04:50.882 CC lib/ftl/utils/ftl_property.o 00:04:50.882 CC lib/ftl/utils/ftl_bitmap.o 00:04:50.882 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:50.882 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:50.882 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:50.882 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:50.882 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:50.882 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:50.882 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:50.882 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:50.882 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:50.882 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:50.882 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:50.882 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:50.882 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:50.882 CC lib/ftl/base/ftl_base_bdev.o 00:04:50.882 CC lib/ftl/ftl_trace.o 00:04:50.882 CC lib/ftl/base/ftl_base_dev.o 00:04:51.825 LIB libspdk_nbd.a 00:04:51.825 SO libspdk_nbd.so.7.0 00:04:51.825 LIB libspdk_scsi.a 00:04:51.825 SYMLINK libspdk_nbd.so 00:04:51.826 SO libspdk_scsi.so.9.0 00:04:51.826 LIB libspdk_ublk.a 00:04:51.826 SYMLINK libspdk_scsi.so 00:04:51.826 SO libspdk_ublk.so.3.0 00:04:52.087 SYMLINK libspdk_ublk.so 00:04:52.087 LIB libspdk_ftl.a 00:04:52.087 CC lib/iscsi/conn.o 00:04:52.087 CC lib/iscsi/init_grp.o 00:04:52.087 CC lib/iscsi/iscsi.o 00:04:52.087 CC lib/iscsi/param.o 00:04:52.087 CC lib/iscsi/portal_grp.o 00:04:52.087 CC lib/iscsi/tgt_node.o 00:04:52.087 CC lib/iscsi/iscsi_subsystem.o 00:04:52.087 CC lib/iscsi/iscsi_rpc.o 00:04:52.087 CC lib/iscsi/task.o 00:04:52.087 CC lib/vhost/vhost.o 00:04:52.087 CC lib/vhost/vhost_rpc.o 00:04:52.087 CC lib/vhost/vhost_scsi.o 00:04:52.348 CC lib/vhost/vhost_blk.o 00:04:52.348 CC lib/vhost/rte_vhost_user.o 00:04:52.348 SO libspdk_ftl.so.9.0 00:04:52.610 SYMLINK libspdk_ftl.so 00:04:53.182 LIB libspdk_nvmf.a 00:04:53.182 SO libspdk_nvmf.so.20.0 00:04:53.182 LIB libspdk_vhost.a 00:04:53.182 SO libspdk_vhost.so.8.0 00:04:53.182 SYMLINK libspdk_nvmf.so 00:04:53.443 SYMLINK libspdk_vhost.so 00:04:53.443 LIB libspdk_iscsi.a 00:04:53.443 SO libspdk_iscsi.so.8.0 00:04:53.704 SYMLINK libspdk_iscsi.so 00:04:54.277 CC module/env_dpdk/env_dpdk_rpc.o 00:04:54.278 CC module/vfu_device/vfu_virtio.o 00:04:54.278 CC module/vfu_device/vfu_virtio_blk.o 00:04:54.278 CC module/vfu_device/vfu_virtio_scsi.o 00:04:54.278 CC module/vfu_device/vfu_virtio_rpc.o 00:04:54.278 CC module/vfu_device/vfu_virtio_fs.o 00:04:54.538 LIB libspdk_env_dpdk_rpc.a 00:04:54.538 CC module/sock/posix/posix.o 00:04:54.538 CC module/keyring/file/keyring.o 00:04:54.538 CC module/keyring/file/keyring_rpc.o 00:04:54.538 CC module/accel/ioat/accel_ioat.o 00:04:54.538 CC module/accel/ioat/accel_ioat_rpc.o 00:04:54.538 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:54.538 CC module/accel/iaa/accel_iaa.o 00:04:54.538 CC module/fsdev/aio/fsdev_aio.o 00:04:54.538 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:54.538 CC module/accel/iaa/accel_iaa_rpc.o 00:04:54.538 CC module/fsdev/aio/linux_aio_mgr.o 00:04:54.538 CC module/keyring/linux/keyring.o 00:04:54.538 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:54.538 CC module/accel/error/accel_error.o 00:04:54.538 CC module/keyring/linux/keyring_rpc.o 00:04:54.538 CC module/accel/dsa/accel_dsa.o 00:04:54.538 CC module/accel/error/accel_error_rpc.o 00:04:54.538 CC module/accel/dsa/accel_dsa_rpc.o 00:04:54.538 CC module/blob/bdev/blob_bdev.o 00:04:54.538 CC 
module/scheduler/gscheduler/gscheduler.o 00:04:54.538 SO libspdk_env_dpdk_rpc.so.6.0 00:04:54.538 SYMLINK libspdk_env_dpdk_rpc.so 00:04:54.538 LIB libspdk_keyring_linux.a 00:04:54.798 LIB libspdk_keyring_file.a 00:04:54.798 LIB libspdk_scheduler_gscheduler.a 00:04:54.798 LIB libspdk_scheduler_dpdk_governor.a 00:04:54.798 SO libspdk_keyring_linux.so.1.0 00:04:54.798 LIB libspdk_accel_ioat.a 00:04:54.798 SO libspdk_keyring_file.so.2.0 00:04:54.798 SO libspdk_scheduler_gscheduler.so.4.0 00:04:54.798 LIB libspdk_scheduler_dynamic.a 00:04:54.798 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:54.798 LIB libspdk_accel_iaa.a 00:04:54.798 LIB libspdk_accel_error.a 00:04:54.798 SO libspdk_accel_ioat.so.6.0 00:04:54.798 SO libspdk_scheduler_dynamic.so.4.0 00:04:54.798 SO libspdk_accel_iaa.so.3.0 00:04:54.798 SYMLINK libspdk_keyring_linux.so 00:04:54.798 LIB libspdk_blob_bdev.a 00:04:54.798 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:54.798 SO libspdk_accel_error.so.2.0 00:04:54.798 SYMLINK libspdk_scheduler_gscheduler.so 00:04:54.798 SYMLINK libspdk_keyring_file.so 00:04:54.798 LIB libspdk_accel_dsa.a 00:04:54.798 SYMLINK libspdk_accel_ioat.so 00:04:54.798 SYMLINK libspdk_scheduler_dynamic.so 00:04:54.798 SO libspdk_blob_bdev.so.12.0 00:04:54.798 SO libspdk_accel_dsa.so.5.0 00:04:54.798 SYMLINK libspdk_accel_iaa.so 00:04:54.798 SYMLINK libspdk_accel_error.so 00:04:54.798 LIB libspdk_vfu_device.a 00:04:54.798 SYMLINK libspdk_blob_bdev.so 00:04:54.798 SYMLINK libspdk_accel_dsa.so 00:04:54.798 SO libspdk_vfu_device.so.3.0 00:04:55.059 SYMLINK libspdk_vfu_device.so 00:04:55.059 LIB libspdk_fsdev_aio.a 00:04:55.059 SO libspdk_fsdev_aio.so.1.0 00:04:55.059 LIB libspdk_sock_posix.a 00:04:55.320 SO libspdk_sock_posix.so.6.0 00:04:55.320 SYMLINK libspdk_fsdev_aio.so 00:04:55.320 SYMLINK libspdk_sock_posix.so 00:04:55.581 CC module/bdev/error/vbdev_error.o 00:04:55.581 CC module/bdev/error/vbdev_error_rpc.o 00:04:55.581 CC module/bdev/delay/vbdev_delay.o 00:04:55.581 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:55.581 CC module/bdev/nvme/bdev_nvme.o 00:04:55.581 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:55.581 CC module/bdev/nvme/nvme_rpc.o 00:04:55.581 CC module/bdev/nvme/bdev_mdns_client.o 00:04:55.581 CC module/bdev/lvol/vbdev_lvol.o 00:04:55.581 CC module/bdev/nvme/vbdev_opal.o 00:04:55.581 CC module/bdev/gpt/gpt.o 00:04:55.581 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:55.581 CC module/bdev/split/vbdev_split.o 00:04:55.581 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:55.581 CC module/bdev/gpt/vbdev_gpt.o 00:04:55.581 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:55.581 CC module/bdev/split/vbdev_split_rpc.o 00:04:55.581 CC module/bdev/null/bdev_null.o 00:04:55.581 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:55.581 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:55.581 CC module/bdev/null/bdev_null_rpc.o 00:04:55.581 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:55.581 CC module/bdev/passthru/vbdev_passthru.o 00:04:55.581 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:55.581 CC module/blobfs/bdev/blobfs_bdev.o 00:04:55.581 CC module/bdev/iscsi/bdev_iscsi.o 00:04:55.581 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:55.581 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:55.581 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:55.581 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:55.581 CC module/bdev/raid/bdev_raid.o 00:04:55.581 CC module/bdev/aio/bdev_aio.o 00:04:55.581 CC module/bdev/raid/bdev_raid_rpc.o 00:04:55.581 CC module/bdev/malloc/bdev_malloc.o 00:04:55.581 CC 
module/bdev/raid/bdev_raid_sb.o 00:04:55.581 CC module/bdev/aio/bdev_aio_rpc.o 00:04:55.581 CC module/bdev/raid/raid0.o 00:04:55.581 CC module/bdev/ftl/bdev_ftl.o 00:04:55.581 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:55.581 CC module/bdev/raid/raid1.o 00:04:55.581 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:55.581 CC module/bdev/raid/concat.o 00:04:55.840 LIB libspdk_blobfs_bdev.a 00:04:55.840 LIB libspdk_bdev_error.a 00:04:55.840 LIB libspdk_bdev_split.a 00:04:55.841 SO libspdk_blobfs_bdev.so.6.0 00:04:55.841 SO libspdk_bdev_error.so.6.0 00:04:55.841 LIB libspdk_bdev_gpt.a 00:04:55.841 LIB libspdk_bdev_null.a 00:04:55.841 SO libspdk_bdev_split.so.6.0 00:04:55.841 SO libspdk_bdev_null.so.6.0 00:04:55.841 SO libspdk_bdev_gpt.so.6.0 00:04:55.841 SYMLINK libspdk_bdev_error.so 00:04:55.841 SYMLINK libspdk_blobfs_bdev.so 00:04:55.841 LIB libspdk_bdev_ftl.a 00:04:55.841 LIB libspdk_bdev_passthru.a 00:04:55.841 LIB libspdk_bdev_delay.a 00:04:55.841 SYMLINK libspdk_bdev_split.so 00:04:55.841 LIB libspdk_bdev_aio.a 00:04:55.841 SYMLINK libspdk_bdev_gpt.so 00:04:55.841 LIB libspdk_bdev_zone_block.a 00:04:55.841 SO libspdk_bdev_ftl.so.6.0 00:04:55.841 SYMLINK libspdk_bdev_null.so 00:04:55.841 SO libspdk_bdev_passthru.so.6.0 00:04:55.841 LIB libspdk_bdev_iscsi.a 00:04:55.841 SO libspdk_bdev_delay.so.6.0 00:04:56.101 LIB libspdk_bdev_malloc.a 00:04:56.101 SO libspdk_bdev_aio.so.6.0 00:04:56.101 SO libspdk_bdev_zone_block.so.6.0 00:04:56.101 SO libspdk_bdev_iscsi.so.6.0 00:04:56.101 SO libspdk_bdev_malloc.so.6.0 00:04:56.101 SYMLINK libspdk_bdev_ftl.so 00:04:56.101 SYMLINK libspdk_bdev_passthru.so 00:04:56.101 SYMLINK libspdk_bdev_delay.so 00:04:56.101 SYMLINK libspdk_bdev_aio.so 00:04:56.101 SYMLINK libspdk_bdev_zone_block.so 00:04:56.101 SYMLINK libspdk_bdev_iscsi.so 00:04:56.101 LIB libspdk_bdev_lvol.a 00:04:56.101 LIB libspdk_bdev_virtio.a 00:04:56.101 SYMLINK libspdk_bdev_malloc.so 00:04:56.101 SO libspdk_bdev_virtio.so.6.0 00:04:56.101 SO libspdk_bdev_lvol.so.6.0 00:04:56.101 SYMLINK libspdk_bdev_virtio.so 00:04:56.101 SYMLINK libspdk_bdev_lvol.so 00:04:56.363 LIB libspdk_bdev_raid.a 00:04:56.624 SO libspdk_bdev_raid.so.6.0 00:04:56.624 SYMLINK libspdk_bdev_raid.so 00:04:58.010 LIB libspdk_bdev_nvme.a 00:04:58.010 SO libspdk_bdev_nvme.so.7.1 00:04:58.010 SYMLINK libspdk_bdev_nvme.so 00:04:58.582 CC module/event/subsystems/iobuf/iobuf.o 00:04:58.582 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:58.582 CC module/event/subsystems/vmd/vmd.o 00:04:58.582 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:58.582 CC module/event/subsystems/sock/sock.o 00:04:58.582 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:58.582 CC module/event/subsystems/scheduler/scheduler.o 00:04:58.582 CC module/event/subsystems/keyring/keyring.o 00:04:58.582 CC module/event/subsystems/fsdev/fsdev.o 00:04:58.582 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:58.844 LIB libspdk_event_vfu_tgt.a 00:04:58.844 LIB libspdk_event_scheduler.a 00:04:58.844 LIB libspdk_event_vmd.a 00:04:58.844 LIB libspdk_event_keyring.a 00:04:58.844 LIB libspdk_event_sock.a 00:04:58.844 LIB libspdk_event_iobuf.a 00:04:58.844 LIB libspdk_event_vhost_blk.a 00:04:58.844 LIB libspdk_event_fsdev.a 00:04:58.844 SO libspdk_event_scheduler.so.4.0 00:04:58.844 SO libspdk_event_vfu_tgt.so.3.0 00:04:58.844 SO libspdk_event_keyring.so.1.0 00:04:58.844 SO libspdk_event_vmd.so.6.0 00:04:58.844 SO libspdk_event_sock.so.5.0 00:04:58.844 SO libspdk_event_iobuf.so.3.0 00:04:58.844 SO libspdk_event_vhost_blk.so.3.0 00:04:58.844 SO libspdk_event_fsdev.so.1.0 
00:04:58.844 SYMLINK libspdk_event_scheduler.so 00:04:59.105 SYMLINK libspdk_event_vfu_tgt.so 00:04:59.106 SYMLINK libspdk_event_keyring.so 00:04:59.106 SYMLINK libspdk_event_vhost_blk.so 00:04:59.106 SYMLINK libspdk_event_vmd.so 00:04:59.106 SYMLINK libspdk_event_sock.so 00:04:59.106 SYMLINK libspdk_event_iobuf.so 00:04:59.106 SYMLINK libspdk_event_fsdev.so 00:04:59.367 CC module/event/subsystems/accel/accel.o 00:04:59.628 LIB libspdk_event_accel.a 00:04:59.628 SO libspdk_event_accel.so.6.0 00:04:59.628 SYMLINK libspdk_event_accel.so 00:04:59.888 CC module/event/subsystems/bdev/bdev.o 00:05:00.151 LIB libspdk_event_bdev.a 00:05:00.151 SO libspdk_event_bdev.so.6.0 00:05:00.151 SYMLINK libspdk_event_bdev.so 00:05:00.723 CC module/event/subsystems/ublk/ublk.o 00:05:00.723 CC module/event/subsystems/nbd/nbd.o 00:05:00.723 CC module/event/subsystems/scsi/scsi.o 00:05:00.723 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:00.723 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:00.723 LIB libspdk_event_ublk.a 00:05:00.723 SO libspdk_event_ublk.so.3.0 00:05:00.723 LIB libspdk_event_nbd.a 00:05:00.723 LIB libspdk_event_scsi.a 00:05:00.723 SO libspdk_event_nbd.so.6.0 00:05:00.984 SO libspdk_event_scsi.so.6.0 00:05:00.984 SYMLINK libspdk_event_ublk.so 00:05:00.984 LIB libspdk_event_nvmf.a 00:05:00.984 SYMLINK libspdk_event_nbd.so 00:05:00.984 SO libspdk_event_nvmf.so.6.0 00:05:00.984 SYMLINK libspdk_event_scsi.so 00:05:00.984 SYMLINK libspdk_event_nvmf.so 00:05:01.250 CC module/event/subsystems/iscsi/iscsi.o 00:05:01.250 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:01.512 LIB libspdk_event_vhost_scsi.a 00:05:01.512 LIB libspdk_event_iscsi.a 00:05:01.512 SO libspdk_event_iscsi.so.6.0 00:05:01.512 SO libspdk_event_vhost_scsi.so.3.0 00:05:01.512 SYMLINK libspdk_event_iscsi.so 00:05:01.512 SYMLINK libspdk_event_vhost_scsi.so 00:05:01.774 SO libspdk.so.6.0 00:05:01.774 SYMLINK libspdk.so 00:05:02.349 CXX app/trace/trace.o 00:05:02.349 CC app/trace_record/trace_record.o 00:05:02.349 CC test/rpc_client/rpc_client_test.o 00:05:02.349 TEST_HEADER include/spdk/accel.h 00:05:02.349 TEST_HEADER include/spdk/accel_module.h 00:05:02.349 TEST_HEADER include/spdk/assert.h 00:05:02.349 CC app/spdk_nvme_identify/identify.o 00:05:02.349 CC app/spdk_nvme_discover/discovery_aer.o 00:05:02.349 CC app/spdk_top/spdk_top.o 00:05:02.349 TEST_HEADER include/spdk/barrier.h 00:05:02.349 CC app/spdk_lspci/spdk_lspci.o 00:05:02.349 TEST_HEADER include/spdk/base64.h 00:05:02.349 CC app/spdk_nvme_perf/perf.o 00:05:02.349 TEST_HEADER include/spdk/bdev.h 00:05:02.349 TEST_HEADER include/spdk/bdev_module.h 00:05:02.349 TEST_HEADER include/spdk/bdev_zone.h 00:05:02.349 TEST_HEADER include/spdk/bit_array.h 00:05:02.349 TEST_HEADER include/spdk/bit_pool.h 00:05:02.349 TEST_HEADER include/spdk/blob_bdev.h 00:05:02.349 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:02.349 TEST_HEADER include/spdk/blob.h 00:05:02.349 TEST_HEADER include/spdk/blobfs.h 00:05:02.349 TEST_HEADER include/spdk/conf.h 00:05:02.349 TEST_HEADER include/spdk/cpuset.h 00:05:02.349 TEST_HEADER include/spdk/config.h 00:05:02.349 TEST_HEADER include/spdk/crc16.h 00:05:02.349 TEST_HEADER include/spdk/crc32.h 00:05:02.349 TEST_HEADER include/spdk/crc64.h 00:05:02.349 TEST_HEADER include/spdk/dif.h 00:05:02.349 TEST_HEADER include/spdk/dma.h 00:05:02.349 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:02.349 TEST_HEADER include/spdk/endian.h 00:05:02.349 TEST_HEADER include/spdk/env_dpdk.h 00:05:02.349 TEST_HEADER include/spdk/event.h 00:05:02.349 TEST_HEADER 
include/spdk/env.h 00:05:02.349 TEST_HEADER include/spdk/fd_group.h 00:05:02.349 TEST_HEADER include/spdk/fd.h 00:05:02.349 TEST_HEADER include/spdk/file.h 00:05:02.349 TEST_HEADER include/spdk/fsdev.h 00:05:02.349 TEST_HEADER include/spdk/fsdev_module.h 00:05:02.349 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:02.349 TEST_HEADER include/spdk/ftl.h 00:05:02.349 TEST_HEADER include/spdk/hexlify.h 00:05:02.349 TEST_HEADER include/spdk/gpt_spec.h 00:05:02.349 TEST_HEADER include/spdk/histogram_data.h 00:05:02.349 TEST_HEADER include/spdk/idxd.h 00:05:02.350 TEST_HEADER include/spdk/idxd_spec.h 00:05:02.350 CC app/nvmf_tgt/nvmf_main.o 00:05:02.350 CC app/iscsi_tgt/iscsi_tgt.o 00:05:02.350 TEST_HEADER include/spdk/init.h 00:05:02.350 CC app/spdk_dd/spdk_dd.o 00:05:02.350 TEST_HEADER include/spdk/ioat.h 00:05:02.350 TEST_HEADER include/spdk/ioat_spec.h 00:05:02.350 TEST_HEADER include/spdk/iscsi_spec.h 00:05:02.350 TEST_HEADER include/spdk/jsonrpc.h 00:05:02.350 TEST_HEADER include/spdk/json.h 00:05:02.350 TEST_HEADER include/spdk/keyring.h 00:05:02.350 TEST_HEADER include/spdk/keyring_module.h 00:05:02.350 TEST_HEADER include/spdk/likely.h 00:05:02.350 TEST_HEADER include/spdk/log.h 00:05:02.350 TEST_HEADER include/spdk/lvol.h 00:05:02.350 TEST_HEADER include/spdk/memory.h 00:05:02.350 TEST_HEADER include/spdk/md5.h 00:05:02.350 TEST_HEADER include/spdk/mmio.h 00:05:02.350 TEST_HEADER include/spdk/nbd.h 00:05:02.350 TEST_HEADER include/spdk/notify.h 00:05:02.350 TEST_HEADER include/spdk/net.h 00:05:02.350 TEST_HEADER include/spdk/nvme.h 00:05:02.350 TEST_HEADER include/spdk/nvme_intel.h 00:05:02.350 CC app/spdk_tgt/spdk_tgt.o 00:05:02.350 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:02.350 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:02.350 TEST_HEADER include/spdk/nvme_spec.h 00:05:02.350 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:02.350 TEST_HEADER include/spdk/nvme_zns.h 00:05:02.350 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:02.350 TEST_HEADER include/spdk/nvmf.h 00:05:02.350 TEST_HEADER include/spdk/nvmf_spec.h 00:05:02.350 TEST_HEADER include/spdk/opal.h 00:05:02.350 TEST_HEADER include/spdk/nvmf_transport.h 00:05:02.350 TEST_HEADER include/spdk/opal_spec.h 00:05:02.350 TEST_HEADER include/spdk/pipe.h 00:05:02.350 TEST_HEADER include/spdk/pci_ids.h 00:05:02.350 TEST_HEADER include/spdk/queue.h 00:05:02.350 TEST_HEADER include/spdk/rpc.h 00:05:02.350 TEST_HEADER include/spdk/reduce.h 00:05:02.350 TEST_HEADER include/spdk/scheduler.h 00:05:02.350 TEST_HEADER include/spdk/scsi.h 00:05:02.350 TEST_HEADER include/spdk/scsi_spec.h 00:05:02.350 TEST_HEADER include/spdk/sock.h 00:05:02.350 TEST_HEADER include/spdk/stdinc.h 00:05:02.350 TEST_HEADER include/spdk/string.h 00:05:02.350 TEST_HEADER include/spdk/thread.h 00:05:02.350 TEST_HEADER include/spdk/trace.h 00:05:02.350 TEST_HEADER include/spdk/trace_parser.h 00:05:02.350 TEST_HEADER include/spdk/tree.h 00:05:02.350 TEST_HEADER include/spdk/ublk.h 00:05:02.350 TEST_HEADER include/spdk/util.h 00:05:02.350 TEST_HEADER include/spdk/version.h 00:05:02.350 TEST_HEADER include/spdk/uuid.h 00:05:02.350 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:02.350 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:02.350 TEST_HEADER include/spdk/vhost.h 00:05:02.350 TEST_HEADER include/spdk/vmd.h 00:05:02.350 TEST_HEADER include/spdk/zipf.h 00:05:02.350 TEST_HEADER include/spdk/xor.h 00:05:02.350 CXX test/cpp_headers/accel.o 00:05:02.350 CXX test/cpp_headers/accel_module.o 00:05:02.350 CXX test/cpp_headers/barrier.o 00:05:02.350 CXX 
test/cpp_headers/assert.o 00:05:02.350 CXX test/cpp_headers/bdev.o 00:05:02.350 CXX test/cpp_headers/base64.o 00:05:02.350 CXX test/cpp_headers/bdev_module.o 00:05:02.350 CXX test/cpp_headers/bdev_zone.o 00:05:02.350 CXX test/cpp_headers/bit_array.o 00:05:02.350 CXX test/cpp_headers/bit_pool.o 00:05:02.350 CXX test/cpp_headers/blob_bdev.o 00:05:02.350 CXX test/cpp_headers/blobfs_bdev.o 00:05:02.350 CXX test/cpp_headers/blobfs.o 00:05:02.350 CXX test/cpp_headers/blob.o 00:05:02.350 CXX test/cpp_headers/conf.o 00:05:02.350 CXX test/cpp_headers/config.o 00:05:02.350 CXX test/cpp_headers/cpuset.o 00:05:02.350 CXX test/cpp_headers/crc32.o 00:05:02.350 CXX test/cpp_headers/crc16.o 00:05:02.350 CXX test/cpp_headers/crc64.o 00:05:02.350 CXX test/cpp_headers/dif.o 00:05:02.350 CXX test/cpp_headers/dma.o 00:05:02.350 CXX test/cpp_headers/endian.o 00:05:02.350 CXX test/cpp_headers/env_dpdk.o 00:05:02.350 CXX test/cpp_headers/event.o 00:05:02.350 CXX test/cpp_headers/env.o 00:05:02.350 CXX test/cpp_headers/fd_group.o 00:05:02.350 CXX test/cpp_headers/fd.o 00:05:02.350 CXX test/cpp_headers/file.o 00:05:02.350 CXX test/cpp_headers/fsdev.o 00:05:02.350 CXX test/cpp_headers/fsdev_module.o 00:05:02.350 CXX test/cpp_headers/ftl.o 00:05:02.350 CXX test/cpp_headers/fuse_dispatcher.o 00:05:02.350 CXX test/cpp_headers/hexlify.o 00:05:02.350 CXX test/cpp_headers/gpt_spec.o 00:05:02.350 CXX test/cpp_headers/histogram_data.o 00:05:02.350 CXX test/cpp_headers/idxd.o 00:05:02.350 CXX test/cpp_headers/idxd_spec.o 00:05:02.350 CXX test/cpp_headers/ioat.o 00:05:02.350 CXX test/cpp_headers/init.o 00:05:02.350 CXX test/cpp_headers/ioat_spec.o 00:05:02.350 CXX test/cpp_headers/iscsi_spec.o 00:05:02.350 CXX test/cpp_headers/json.o 00:05:02.350 CXX test/cpp_headers/jsonrpc.o 00:05:02.350 CXX test/cpp_headers/likely.o 00:05:02.350 CXX test/cpp_headers/log.o 00:05:02.350 CXX test/cpp_headers/keyring_module.o 00:05:02.350 CXX test/cpp_headers/keyring.o 00:05:02.350 CXX test/cpp_headers/lvol.o 00:05:02.350 CXX test/cpp_headers/memory.o 00:05:02.350 CXX test/cpp_headers/md5.o 00:05:02.350 CXX test/cpp_headers/nbd.o 00:05:02.350 CXX test/cpp_headers/notify.o 00:05:02.350 CXX test/cpp_headers/mmio.o 00:05:02.350 CXX test/cpp_headers/net.o 00:05:02.350 CXX test/cpp_headers/nvme.o 00:05:02.350 CXX test/cpp_headers/nvme_intel.o 00:05:02.350 CXX test/cpp_headers/nvme_ocssd.o 00:05:02.350 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:02.350 CC examples/ioat/perf/perf.o 00:05:02.350 CXX test/cpp_headers/nvmf_cmd.o 00:05:02.350 CXX test/cpp_headers/nvme_spec.o 00:05:02.350 CC examples/ioat/verify/verify.o 00:05:02.350 CXX test/cpp_headers/nvme_zns.o 00:05:02.350 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:02.350 CXX test/cpp_headers/nvmf.o 00:05:02.350 CXX test/cpp_headers/nvmf_spec.o 00:05:02.350 CC test/app/stub/stub.o 00:05:02.350 CXX test/cpp_headers/nvmf_transport.o 00:05:02.350 CC test/app/histogram_perf/histogram_perf.o 00:05:02.350 CXX test/cpp_headers/opal.o 00:05:02.350 CXX test/cpp_headers/opal_spec.o 00:05:02.350 CC test/env/pci/pci_ut.o 00:05:02.350 CXX test/cpp_headers/pci_ids.o 00:05:02.350 CXX test/cpp_headers/queue.o 00:05:02.350 CXX test/cpp_headers/pipe.o 00:05:02.350 CXX test/cpp_headers/reduce.o 00:05:02.350 CC examples/util/zipf/zipf.o 00:05:02.350 CXX test/cpp_headers/scsi.o 00:05:02.350 CXX test/cpp_headers/rpc.o 00:05:02.350 CXX test/cpp_headers/scheduler.o 00:05:02.350 CXX test/cpp_headers/scsi_spec.o 00:05:02.350 CXX test/cpp_headers/sock.o 00:05:02.618 CXX test/cpp_headers/thread.o 00:05:02.618 CXX 
test/cpp_headers/stdinc.o 00:05:02.618 CXX test/cpp_headers/trace_parser.o 00:05:02.618 LINK spdk_lspci 00:05:02.618 CXX test/cpp_headers/string.o 00:05:02.618 CXX test/cpp_headers/trace.o 00:05:02.618 CC test/app/jsoncat/jsoncat.o 00:05:02.618 CC test/env/vtophys/vtophys.o 00:05:02.618 CXX test/cpp_headers/uuid.o 00:05:02.618 CXX test/cpp_headers/tree.o 00:05:02.618 CC test/app/bdev_svc/bdev_svc.o 00:05:02.618 CXX test/cpp_headers/util.o 00:05:02.618 CXX test/cpp_headers/ublk.o 00:05:02.618 CC app/fio/nvme/fio_plugin.o 00:05:02.618 CC test/env/memory/memory_ut.o 00:05:02.618 CXX test/cpp_headers/version.o 00:05:02.618 CXX test/cpp_headers/vfio_user_pci.o 00:05:02.618 CXX test/cpp_headers/vhost.o 00:05:02.618 CXX test/cpp_headers/vfio_user_spec.o 00:05:02.618 CC test/thread/poller_perf/poller_perf.o 00:05:02.618 CXX test/cpp_headers/vmd.o 00:05:02.618 CXX test/cpp_headers/xor.o 00:05:02.618 CXX test/cpp_headers/zipf.o 00:05:02.618 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:02.618 CC app/fio/bdev/fio_plugin.o 00:05:02.618 LINK rpc_client_test 00:05:02.618 CC test/dma/test_dma/test_dma.o 00:05:02.618 LINK interrupt_tgt 00:05:02.886 LINK spdk_nvme_discover 00:05:02.886 LINK nvmf_tgt 00:05:02.886 LINK spdk_trace_record 00:05:03.151 LINK iscsi_tgt 00:05:03.151 LINK spdk_tgt 00:05:03.151 LINK spdk_trace 00:05:03.151 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:03.151 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:03.151 LINK stub 00:05:03.151 CC test/env/mem_callbacks/mem_callbacks.o 00:05:03.151 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:03.151 LINK zipf 00:05:03.151 LINK spdk_dd 00:05:03.151 LINK histogram_perf 00:05:03.151 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:03.151 LINK jsoncat 00:05:03.411 LINK vtophys 00:05:03.671 LINK poller_perf 00:05:03.671 LINK env_dpdk_post_init 00:05:03.671 LINK bdev_svc 00:05:03.671 LINK verify 00:05:03.671 LINK ioat_perf 00:05:03.671 CC app/vhost/vhost.o 00:05:03.671 LINK spdk_nvme_perf 00:05:03.931 LINK spdk_top 00:05:03.931 LINK spdk_nvme_identify 00:05:03.931 CC examples/idxd/perf/perf.o 00:05:03.931 CC examples/vmd/led/led.o 00:05:03.931 CC examples/vmd/lsvmd/lsvmd.o 00:05:03.931 CC examples/sock/hello_world/hello_sock.o 00:05:03.931 LINK nvme_fuzz 00:05:03.931 LINK pci_ut 00:05:03.931 CC examples/thread/thread/thread_ex.o 00:05:03.931 LINK vhost_fuzz 00:05:03.931 LINK spdk_bdev 00:05:03.931 LINK spdk_nvme 00:05:03.931 LINK vhost 00:05:03.931 LINK test_dma 00:05:04.192 LINK led 00:05:04.192 LINK mem_callbacks 00:05:04.192 LINK lsvmd 00:05:04.192 CC test/event/event_perf/event_perf.o 00:05:04.192 CC test/event/reactor/reactor.o 00:05:04.192 CC test/event/reactor_perf/reactor_perf.o 00:05:04.192 CC test/event/app_repeat/app_repeat.o 00:05:04.192 CC test/event/scheduler/scheduler.o 00:05:04.192 LINK hello_sock 00:05:04.192 LINK thread 00:05:04.192 LINK idxd_perf 00:05:04.192 LINK event_perf 00:05:04.192 LINK reactor 00:05:04.192 LINK reactor_perf 00:05:04.453 LINK app_repeat 00:05:04.453 LINK memory_ut 00:05:04.453 LINK scheduler 00:05:04.713 CC test/nvme/aer/aer.o 00:05:04.713 CC test/nvme/startup/startup.o 00:05:04.713 CC test/nvme/e2edp/nvme_dp.o 00:05:04.713 CC test/nvme/sgl/sgl.o 00:05:04.713 CC test/nvme/reset/reset.o 00:05:04.713 CC test/nvme/simple_copy/simple_copy.o 00:05:04.713 CC test/nvme/err_injection/err_injection.o 00:05:04.713 CC test/nvme/fused_ordering/fused_ordering.o 00:05:04.713 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:04.713 CC test/nvme/overhead/overhead.o 00:05:04.713 CC 
test/nvme/boot_partition/boot_partition.o
00:05:04.713 CC test/nvme/reserve/reserve.o
00:05:04.713 CC test/nvme/compliance/nvme_compliance.o
00:05:04.713 CC test/nvme/cuse/cuse.o
00:05:04.713 CC test/nvme/connect_stress/connect_stress.o
00:05:04.713 CC test/nvme/fdp/fdp.o
00:05:04.713 CC test/blobfs/mkfs/mkfs.o
00:05:04.713 CC test/accel/dif/dif.o
00:05:04.713 CC examples/nvme/nvme_manage/nvme_manage.o
00:05:04.713 CC examples/nvme/reconnect/reconnect.o
00:05:04.713 CC examples/nvme/arbitration/arbitration.o
00:05:04.713 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:05:04.713 CC examples/nvme/cmb_copy/cmb_copy.o
00:05:04.713 CC examples/nvme/hotplug/hotplug.o
00:05:04.713 CC examples/nvme/abort/abort.o
00:05:04.713 CC examples/nvme/hello_world/hello_world.o
00:05:04.713 CC test/lvol/esnap/esnap.o
00:05:04.972 CC examples/accel/perf/accel_perf.o
00:05:04.972 LINK startup
00:05:04.972 CC examples/blob/hello_world/hello_blob.o
00:05:04.972 CC examples/blob/cli/blobcli.o
00:05:04.972 LINK boot_partition
00:05:04.972 LINK err_injection
00:05:04.972 CC examples/fsdev/hello_world/hello_fsdev.o
00:05:04.972 LINK doorbell_aers
00:05:04.972 LINK connect_stress
00:05:04.972 LINK reserve
00:05:04.972 LINK fused_ordering
00:05:04.972 LINK mkfs
00:05:04.972 LINK sgl
00:05:04.972 LINK simple_copy
00:05:04.972 LINK nvme_dp
00:05:04.972 LINK iscsi_fuzz
00:05:04.972 LINK reset
00:05:04.972 LINK pmr_persistence
00:05:04.972 LINK aer
00:05:04.972 LINK overhead
00:05:04.972 LINK cmb_copy
00:05:04.972 LINK hotplug
00:05:04.972 LINK nvme_compliance
00:05:04.973 LINK hello_world
00:05:04.973 LINK fdp
00:05:05.232 LINK reconnect
00:05:05.232 LINK arbitration
00:05:05.232 LINK abort
00:05:05.232 LINK hello_blob
00:05:05.232 LINK hello_fsdev
00:05:05.232 LINK nvme_manage
00:05:05.493 LINK dif
00:05:05.493 LINK accel_perf
00:05:05.493 LINK blobcli
00:05:06.066 LINK cuse
00:05:06.066 CC examples/bdev/hello_world/hello_bdev.o
00:05:06.066 CC examples/bdev/bdevperf/bdevperf.o
00:05:06.066 CC test/bdev/bdevio/bdevio.o
00:05:06.329 LINK hello_bdev
00:05:06.329 LINK bdevio
00:05:06.591 LINK bdevperf
00:05:07.532 CC examples/nvmf/nvmf/nvmf.o
00:05:07.532 LINK nvmf
00:05:09.446 LINK esnap
00:05:09.446
00:05:09.446 real 0m56.193s
00:05:09.446 user 8m8.519s
00:05:09.446 sys 5m39.315s
00:05:09.446 07:14:37 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:09.446 07:14:37 make -- common/autotest_common.sh@10 -- $ set +x
00:05:09.446 ************************************
00:05:09.446 END TEST make
00:05:09.446 ************************************
00:05:09.446 07:14:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:05:09.446 07:14:37 -- pm/common@29 -- $ signal_monitor_resources TERM
00:05:09.446 07:14:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:05:09.446 07:14:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:09.446 07:14:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:05:09.446 07:14:37 -- pm/common@44 -- $ pid=1131018
00:05:09.446 07:14:37 -- pm/common@50 -- $ kill -TERM 1131018
00:05:09.446 07:14:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:09.446 07:14:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:05:09.446 07:14:37 -- pm/common@44 -- $ pid=1131019
00:05:09.446 07:14:37 -- pm/common@50 -- $ kill -TERM 1131019
00:05:09.446 07:14:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:09.446 07:14:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:05:09.446 07:14:37 -- pm/common@44 -- $ pid=1131021
00:05:09.446 07:14:37 -- pm/common@50 -- $ kill -TERM 1131021
00:05:09.446 07:14:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:09.446 07:14:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:05:09.446 07:14:37 -- pm/common@44 -- $ pid=1131045
00:05:09.446 07:14:37 -- pm/common@50 -- $ sudo -E kill -TERM 1131045
00:05:09.705 07:14:37 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:05:09.705 07:14:37 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:05:09.705 07:14:37 -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:09.705 07:14:37 -- common/autotest_common.sh@1693 -- # lcov --version
00:05:09.705 07:14:37 -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:09.705 07:14:37 -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:09.705 07:14:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:09.705 07:14:37 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:09.705 07:14:37 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:09.705 07:14:37 -- scripts/common.sh@336 -- # IFS=.-:
00:05:09.705 07:14:37 -- scripts/common.sh@336 -- # read -ra ver1
00:05:09.706 07:14:37 -- scripts/common.sh@337 -- # IFS=.-:
00:05:09.706 07:14:37 -- scripts/common.sh@337 -- # read -ra ver2
00:05:09.706 07:14:37 -- scripts/common.sh@338 -- # local 'op=<'
00:05:09.706 07:14:37 -- scripts/common.sh@340 -- # ver1_l=2
00:05:09.706 07:14:37 -- scripts/common.sh@341 -- # ver2_l=1
00:05:09.706 07:14:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:09.706 07:14:37 -- scripts/common.sh@344 -- # case "$op" in
00:05:09.706 07:14:37 -- scripts/common.sh@345 -- # : 1
00:05:09.706 07:14:37 -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:09.706 07:14:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:09.706 07:14:37 -- scripts/common.sh@365 -- # decimal 1
00:05:09.706 07:14:37 -- scripts/common.sh@353 -- # local d=1
00:05:09.706 07:14:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:09.706 07:14:37 -- scripts/common.sh@355 -- # echo 1
00:05:09.706 07:14:37 -- scripts/common.sh@365 -- # ver1[v]=1
00:05:09.706 07:14:37 -- scripts/common.sh@366 -- # decimal 2
00:05:09.706 07:14:37 -- scripts/common.sh@353 -- # local d=2
00:05:09.706 07:14:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:09.706 07:14:37 -- scripts/common.sh@355 -- # echo 2
00:05:09.706 07:14:37 -- scripts/common.sh@366 -- # ver2[v]=2
00:05:09.706 07:14:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:09.706 07:14:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:09.706 07:14:37 -- scripts/common.sh@368 -- # return 0
00:05:09.706 07:14:37 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:09.706 07:14:37 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.706 --rc genhtml_branch_coverage=1
00:05:09.706 --rc genhtml_function_coverage=1
00:05:09.706 --rc genhtml_legend=1
00:05:09.706 --rc geninfo_all_blocks=1
00:05:09.706 --rc geninfo_unexecuted_blocks=1
00:05:09.706
00:05:09.706 '
00:05:09.706 07:14:37 -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.706 --rc genhtml_branch_coverage=1
00:05:09.706 --rc genhtml_function_coverage=1
00:05:09.706 --rc genhtml_legend=1
00:05:09.706 --rc geninfo_all_blocks=1
00:05:09.706 --rc geninfo_unexecuted_blocks=1
00:05:09.706
00:05:09.706 '
00:05:09.706 07:14:37 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.706 --rc genhtml_branch_coverage=1
00:05:09.706 --rc genhtml_function_coverage=1
00:05:09.706 --rc genhtml_legend=1
00:05:09.706 --rc geninfo_all_blocks=1
00:05:09.706 --rc geninfo_unexecuted_blocks=1
00:05:09.706
00:05:09.706 '
00:05:09.706 07:14:37 -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:09.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.706 --rc genhtml_branch_coverage=1
00:05:09.706 --rc genhtml_function_coverage=1
00:05:09.706 --rc genhtml_legend=1
00:05:09.706 --rc geninfo_all_blocks=1
00:05:09.706 --rc geninfo_unexecuted_blocks=1
00:05:09.706
00:05:09.706 '
00:05:09.706 07:14:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:09.706 07:14:37 -- nvmf/common.sh@7 -- # uname -s
00:05:09.706 07:14:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:09.706 07:14:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:09.706 07:14:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:09.706 07:14:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:09.706 07:14:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:09.706 07:14:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:09.706 07:14:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:09.706 07:14:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:09.706 07:14:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:09.706 07:14:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:09.706 07:14:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:09.706 07:14:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:09.706 07:14:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:09.706 07:14:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:09.706 07:14:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:09.706 07:14:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:09.706 07:14:37 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:09.706 07:14:37 -- scripts/common.sh@15 -- # shopt -s extglob
00:05:09.706 07:14:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:09.706 07:14:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:09.706 07:14:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:09.706 07:14:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:09.706 07:14:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:09.706 07:14:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:09.706 07:14:37 -- paths/export.sh@5 -- # export PATH
00:05:09.706 07:14:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:09.706 07:14:37 -- nvmf/common.sh@51 -- # : 0
00:05:09.706 07:14:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:09.706 07:14:37 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:09.706 07:14:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:09.706 07:14:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:09.706 07:14:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:09.706 07:14:37 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:09.706 07:14:37 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:09.706 07:14:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:09.706 07:14:37 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:09.706 07:14:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:05:09.706 07:14:37 -- spdk/autotest.sh@32 -- # uname -s
00:05:09.706 07:14:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:05:09.706 07:14:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:05:09.706 07:14:37 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
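[Editor's note: the xtrace above walks through SPDK's lcov version gate (lt 1.15 2 via cmp_versions in scripts/common.sh): both version strings are split on '.', '-' and ':' and compared component by component, numerically. A minimal standalone sketch of that logic, assuming plain bash; the function name cmp_lt is illustrative, not the SPDK source, and the real script additionally validates each component is numeric via its decimal helper:]

  # Return 0 (true) if version $1 is strictly less than version $2.
  cmp_lt() {
    local IFS=.-:            # split fields on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
      (( a > b )) && return 1
      (( a < b )) && return 0
    done
    return 1                 # equal, so not strictly less-than
  }
  cmp_lt 1.15 2 && echo "lcov 1.15 predates 2.x, keep the 1.x --rc options"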
00:05:09.706 07:14:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:05:09.706 07:14:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:05:09.965 07:14:37 -- spdk/autotest.sh@44 -- # modprobe nbd
00:05:09.965 07:14:37 -- spdk/autotest.sh@46 -- # type -P udevadm
00:05:09.965 07:14:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:05:09.965 07:14:37 -- spdk/autotest.sh@48 -- # udevadm_pid=1196570
00:05:09.965 07:14:37 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:05:09.965 07:14:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:05:09.965 07:14:37 -- pm/common@17 -- # local monitor
00:05:09.965 07:14:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:09.965 07:14:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:09.965 07:14:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:09.965 07:14:37 -- pm/common@21 -- # date +%s
00:05:09.965 07:14:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:09.965 07:14:37 -- pm/common@21 -- # date +%s
00:05:09.965 07:14:37 -- pm/common@25 -- # sleep 1
00:05:09.965 07:14:37 -- pm/common@21 -- # date +%s
00:05:09.965 07:14:37 -- pm/common@21 -- # date +%s
00:05:09.965 07:14:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601677
00:05:09.965 07:14:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601677
00:05:09.965 07:14:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601677
00:05:09.965 07:14:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732601677
00:05:09.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601677_collect-cpu-load.pm.log
00:05:09.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601677_collect-vmstat.pm.log
00:05:09.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601677_collect-cpu-temp.pm.log
00:05:09.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732601677_collect-bmc-pm.bmc.pm.log
00:05:10.901 07:14:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:05:10.901 07:14:38 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:05:10.901 07:14:38 -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:10.901 07:14:38 -- common/autotest_common.sh@10 -- # set +x
00:05:10.901 07:14:38 -- spdk/autotest.sh@59 -- # create_test_list
00:05:10.901 07:14:38 -- common/autotest_common.sh@752 -- # xtrace_disable
00:05:10.901 07:14:38 -- common/autotest_common.sh@10 -- # set +x
00:05:10.901 07:14:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:05:10.901 07:14:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:10.901 07:14:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:10.901 07:14:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:05:10.901 07:14:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:10.901 07:14:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:05:10.901 07:14:38 -- common/autotest_common.sh@1457 -- # uname
00:05:10.901 07:14:38 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:05:10.901 07:14:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:05:10.901 07:14:38 -- common/autotest_common.sh@1477 -- # uname
00:05:10.901 07:14:38 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:05:10.901 07:14:38 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:05:10.901 07:14:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:05:10.901 lcov: LCOV version 1.15
00:05:10.901 07:14:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:05:37.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:37.564 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:05:41.761 07:15:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:05:41.761 07:15:09 -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:41.761 07:15:09 -- common/autotest_common.sh@10 -- # set +x
00:05:41.761 07:15:09 -- spdk/autotest.sh@78 -- # rm -f
00:05:41.761 07:15:09 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:05:45.061 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:05:45.322 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:05:45.322 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:05:45.322 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:05:45.322 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:05:45.322 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:05:45.322 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:05:45.322 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:05:45.322 0000:65:00.0 (144d a80a): Already using the nvme driver
00:05:45.322 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:05:45.322 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:05:45.584 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:05:45.584 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:05:45.584 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:05:45.584 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:05:45.584 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:05:45.584 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:05:45.846 07:15:13 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:05:45.846 07:15:13 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:05:45.846 07:15:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:05:45.846 07:15:13 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:05:45.846 07:15:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:45.846 07:15:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:05:45.846 07:15:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:05:45.846 07:15:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:45.846 07:15:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:45.846 07:15:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:05:45.846 07:15:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:45.846 07:15:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:45.846 07:15:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:05:45.846 07:15:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:05:45.846 07:15:13 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:45.846 No valid GPT data, bailing
00:05:45.846 07:15:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:45.846 07:15:13 -- scripts/common.sh@394 -- # pt=
00:05:45.846 07:15:13 -- scripts/common.sh@395 -- # return 1
00:05:45.846 07:15:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:45.846 1+0 records in
00:05:45.846 1+0 records out
00:05:45.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435203 s, 241 MB/s
00:05:45.846 07:15:13 -- spdk/autotest.sh@105 -- # sync
00:05:45.846 07:15:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:45.846 07:15:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:45.846 07:15:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:55.850 07:15:22 -- spdk/autotest.sh@111 -- # uname -s
00:05:55.850 07:15:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:55.850 07:15:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:55.850 07:15:22 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:05:58.398 Hugepages
00:05:58.398 node hugesize free / total
00:05:58.398 node0 1048576kB 0 / 0
00:05:58.398 node0 2048kB 0 / 0
00:05:58.398 node1 1048576kB 0 / 0
00:05:58.398 node1 2048kB 0 / 0
00:05:58.398
00:05:58.398 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:58.398 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:05:58.398 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:05:58.398 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:05:58.398 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:05:58.398 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:05:58.398 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:05:58.398 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:05:58.398 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:05:58.398 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:05:58.398 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:05:58.398 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:05:58.398 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:05:58.398 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:05:58.398 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:05:58.398 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:05:58.398 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:05:58.398 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:05:58.398 07:15:26 -- spdk/autotest.sh@117 -- # uname -s
00:05:58.398 07:15:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:58.398 07:15:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:58.398 07:15:26 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:01.701 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:06:01.701 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:06:03.609 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:06:03.869 07:15:31 -- common/autotest_common.sh@1517 -- # sleep 1
00:06:04.811 07:15:32 -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:04.811 07:15:32 -- common/autotest_common.sh@1518 -- # local bdfs
00:06:04.811 07:15:32 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:04.811 07:15:32 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:04.811 07:15:32 -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:04.811 07:15:32 -- common/autotest_common.sh@1498 -- # local bdfs
00:06:04.811 07:15:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:04.811 07:15:32 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:04.811 07:15:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:05.071 07:15:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:05.071 07:15:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0
00:06:05.071 07:15:32 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:06:08.374 Waiting for block devices as requested
00:06:08.374 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:06:08.636 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:06:08.636 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:06:08.636 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:06:08.895 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:06:08.895 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:06:08.895 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:06:09.155 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:06:09.155 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:06:09.417 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:06:09.417 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:06:09.417 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:06:09.677 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:06:09.677 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:06:09.677 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:06:09.938 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:06:09.938 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:06:10.199 07:15:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:10.199 07:15:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0
00:06:10.199 07:15:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:06:10.199 07:15:38 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme
00:06:10.199 07:15:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:06:10.199 07:15:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]]
00:06:10.199 07:15:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:06:10.199 07:15:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:06:10.199 07:15:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:06:10.199 07:15:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:06:10.199 07:15:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:06:10.199 07:15:38 -- common/autotest_common.sh@1531 -- # grep oacs
00:06:10.199 07:15:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:10.199 07:15:38 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f'
00:06:10.199 07:15:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:10.199 07:15:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:10.199 07:15:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:10.199 07:15:38 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:10.199 07:15:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:10.199 07:15:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:10.199 07:15:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:06:10.199 07:15:38 -- common/autotest_common.sh@1543 -- # continue
00:06:10.199 07:15:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:10.199 07:15:38 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:10.199 07:15:38 -- common/autotest_common.sh@10 -- # set +x
00:06:10.199 07:15:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:10.199 07:15:38 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:10.199 07:15:38 -- common/autotest_common.sh@10 -- # set +x
00:06:10.199 07:15:38 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:14.409 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:06:14.409 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:06:14.409 07:15:42 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:14.409 07:15:42 -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:14.409 07:15:42 -- common/autotest_common.sh@10 -- # set +x
00:06:14.409 07:15:42 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:14.409 07:15:42 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:14.409 07:15:42 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:14.409 07:15:42 -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:14.409 07:15:42 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:14.409 07:15:42 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:14.409 07:15:42 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:14.409 07:15:42 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:14.409 07:15:42 -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:14.409 07:15:42 -- common/autotest_common.sh@1498 -- # local bdfs
00:06:14.409 07:15:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:14.409 07:15:42 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:06:14.409 07:15:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:14.409 07:15:42 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:14.409 07:15:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0
00:06:14.409 07:15:42 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:14.409 07:15:42 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device
00:06:14.409 07:15:42 -- common/autotest_common.sh@1566 -- # device=0xa80a
00:06:14.409 07:15:42 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]]
00:06:14.409 07:15:42 -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:06:14.409 07:15:42 -- common/autotest_common.sh@1572 -- # return 0
00:06:14.409 07:15:42 -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:06:14.409 07:15:42 -- common/autotest_common.sh@1580 -- # return 0
00:06:14.409 07:15:42 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:14.409 07:15:42 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:14.409 07:15:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:14.409 07:15:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:14.409 07:15:42 -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:14.409 07:15:42 -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:14.409 07:15:42 -- common/autotest_common.sh@10 -- # set +x
00:06:14.409 07:15:42 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:14.409 07:15:42 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:06:14.409 07:15:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:14.409 07:15:42 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.409 07:15:42 -- common/autotest_common.sh@10 -- # set +x
00:06:14.671 ************************************
00:06:14.671 START TEST env
00:06:14.671 ************************************
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:06:14.671 * Looking for test storage...
00:06:14.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1693 -- # lcov --version
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:14.671 07:15:42 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:14.671 07:15:42 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:14.671 07:15:42 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:14.671 07:15:42 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:14.671 07:15:42 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:14.671 07:15:42 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:14.671 07:15:42 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:14.671 07:15:42 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:14.671 07:15:42 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:14.671 07:15:42 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:14.671 07:15:42 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:14.671 07:15:42 env -- scripts/common.sh@344 -- # case "$op" in
00:06:14.671 07:15:42 env -- scripts/common.sh@345 -- # : 1
00:06:14.671 07:15:42 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:14.671 07:15:42 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:14.671 07:15:42 env -- scripts/common.sh@365 -- # decimal 1
00:06:14.671 07:15:42 env -- scripts/common.sh@353 -- # local d=1
00:06:14.671 07:15:42 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:14.671 07:15:42 env -- scripts/common.sh@355 -- # echo 1
00:06:14.671 07:15:42 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:14.671 07:15:42 env -- scripts/common.sh@366 -- # decimal 2
00:06:14.671 07:15:42 env -- scripts/common.sh@353 -- # local d=2
00:06:14.671 07:15:42 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:14.671 07:15:42 env -- scripts/common.sh@355 -- # echo 2
00:06:14.671 07:15:42 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:14.671 07:15:42 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:14.671 07:15:42 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:14.671 07:15:42 env -- scripts/common.sh@368 -- # return 0
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.671 --rc genhtml_branch_coverage=1
00:06:14.671 --rc genhtml_function_coverage=1
00:06:14.671 --rc genhtml_legend=1
00:06:14.671 --rc geninfo_all_blocks=1
00:06:14.671 --rc geninfo_unexecuted_blocks=1
00:06:14.671
00:06:14.671 '
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.671 --rc genhtml_branch_coverage=1
00:06:14.671 --rc genhtml_function_coverage=1
00:06:14.671 --rc genhtml_legend=1
00:06:14.671 --rc geninfo_all_blocks=1
00:06:14.671 --rc geninfo_unexecuted_blocks=1
00:06:14.671
00:06:14.671 '
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.671 --rc genhtml_branch_coverage=1
00:06:14.671 --rc genhtml_function_coverage=1
00:06:14.671 --rc genhtml_legend=1
00:06:14.671 --rc geninfo_all_blocks=1
00:06:14.671 --rc geninfo_unexecuted_blocks=1
00:06:14.671
00:06:14.671 '
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.671 --rc genhtml_branch_coverage=1
00:06:14.671 --rc genhtml_function_coverage=1
00:06:14.671 --rc genhtml_legend=1
00:06:14.671 --rc geninfo_all_blocks=1
00:06:14.671 --rc geninfo_unexecuted_blocks=1
00:06:14.671
00:06:14.671 '
00:06:14.671 07:15:42 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:14.671 07:15:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.671 07:15:42 env -- common/autotest_common.sh@10 -- # set +x
00:06:14.933 ************************************
00:06:14.933 START TEST env_memory
00:06:14.933 ************************************
00:06:14.933 07:15:42 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:06:14.933
00:06:14.933
00:06:14.933 CUnit - A unit testing framework for C - Version 2.1-3
00:06:14.933 http://cunit.sourceforge.net/
00:06:14.933
00:06:14.933
00:06:14.933 Suite: memory
00:06:14.933 Test: alloc and free memory map ...[2024-11-26 07:15:42.821358] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:14.933 passed
00:06:14.933 Test: mem map translation ...[2024-11-26 07:15:42.846934] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:14.933 [2024-11-26 07:15:42.846964] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:14.933 [2024-11-26 07:15:42.847011] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:14.933 [2024-11-26 07:15:42.847019] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:14.933 passed
00:06:14.933 Test: mem map registration ...[2024-11-26 07:15:42.902236] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:14.933 [2024-11-26 07:15:42.902272] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:14.933 passed
00:06:14.933 Test: mem map adjacent registrations ...passed
00:06:14.933
00:06:14.933 Run Summary: Type Total Ran Passed Failed Inactive
00:06:14.933 suites 1 1 n/a 0 0
00:06:14.933 tests 4 4 4 0 0
00:06:14.933 asserts 152 152 152 0 n/a
00:06:14.933
00:06:14.933 Elapsed time = 0.193 seconds
00:06:14.933
00:06:14.933 real 0m0.208s
00:06:14.933 user 0m0.196s
00:06:14.933 sys 0m0.011s
00:06:14.933 07:15:42 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:14.933 07:15:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:14.933 ************************************
00:06:14.933 END TEST env_memory
00:06:14.933 ************************************
00:06:14.933 07:15:43 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:14.933 07:15:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:14.933 07:15:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.933 07:15:43 env -- common/autotest_common.sh@10 -- # set +x
00:06:15.194 ************************************
00:06:15.194 START TEST env_vtophys
00:06:15.194 ************************************
00:06:15.194 07:15:43 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:06:15.194 EAL: lib.eal log level changed from notice to debug
00:06:15.194 EAL: Detected lcore 0 as core 0 on socket 0
00:06:15.194 EAL: Detected lcore 1 as core 1 on socket 0
00:06:15.194 EAL: Detected lcore 2 as core 2 on socket 0
00:06:15.195 EAL: Detected lcore 3 as core 3 on socket 0
00:06:15.195 EAL: Detected lcore 4 as core 4 on socket 0
00:06:15.195 EAL: Detected lcore 5 as core 5 on socket 0
00:06:15.195 EAL: Detected lcore 6 as core 6 on socket 0
00:06:15.195 EAL: Detected lcore 7 as core 7 on socket 0
00:06:15.195 EAL: Detected lcore 8 as core 8 on socket 0
00:06:15.195 EAL: Detected lcore 9 as core 9 on socket 0
00:06:15.195 EAL: Detected lcore 10 as core 10 on socket 0
00:06:15.195 EAL: Detected lcore 11 as core 11 on socket 0
00:06:15.195 EAL: Detected lcore 12 as core 12 on socket 0
00:06:15.195 EAL: Detected lcore 13 as core 13 on socket 0
00:06:15.195 EAL: Detected lcore 14 as core 14 on socket 0
00:06:15.195 EAL: Detected lcore 15 as core 15 on socket 0
00:06:15.195 EAL: Detected lcore 16 as core 16 on socket 0
00:06:15.195 EAL: Detected lcore 17 as core 17 on socket 0
00:06:15.195 EAL: Detected lcore 18 as core 18 on socket 0
00:06:15.195 EAL: Detected lcore 19 as core 19 on socket 0
00:06:15.195 EAL: Detected lcore 20 as core 20 on socket 0
00:06:15.195 EAL: Detected lcore 21 as core 21 on socket 0
00:06:15.195 EAL: Detected lcore 22 as core 22 on socket 0
00:06:15.195 EAL: Detected lcore 23 as core 23 on socket 0
00:06:15.195 EAL: Detected lcore 24 as core 24 on socket 0
00:06:15.195 EAL: Detected lcore 25 as core 25 on socket 0
00:06:15.195 EAL: Detected lcore 26 as core 26 on socket 0
00:06:15.195 EAL: Detected lcore 27 as core 27 on socket 0
00:06:15.195 EAL: Detected lcore 28 as core 28 on socket 0
00:06:15.195 EAL: Detected lcore 29 as core 29 on socket 0
00:06:15.195 EAL: Detected lcore 30 as core 30 on socket 0
00:06:15.195 EAL: Detected lcore 31 as core 31 on socket 0
00:06:15.195 EAL: Detected lcore 32 as core 32 on socket 0
00:06:15.195 EAL: Detected lcore 33 as core 33 on socket 0
00:06:15.195 EAL: Detected lcore 34 as core 34 on socket 0
00:06:15.195 EAL: Detected lcore 35 as core 35 on socket 0
00:06:15.195 EAL: Detected lcore 36 as core 0 on socket 1
00:06:15.195 EAL: Detected lcore 37 as core 1 on socket 1
00:06:15.195 EAL: Detected lcore 38 as core 2 on socket 1
00:06:15.195 EAL: Detected lcore 39 as core 3 on socket 1
00:06:15.195 EAL: Detected lcore 40 as core 4 on socket 1
00:06:15.195 EAL: Detected lcore 41 as core 5 on socket 1
00:06:15.195 EAL: Detected lcore 42 as core 6 on socket 1
00:06:15.195 EAL: Detected lcore 43 as core 7 on socket 1
00:06:15.195 EAL: Detected lcore 44 as core 8 on socket 1
00:06:15.195 EAL: Detected lcore 45 as core 9 on socket 1
00:06:15.195 EAL: Detected lcore 46 as core 10 on socket 1
00:06:15.195 EAL: Detected lcore 47 as core 11 on socket 1
00:06:15.195 EAL: Detected lcore 48 as core 12 on socket 1
00:06:15.195 EAL: Detected lcore 49 as core 13 on socket 1
00:06:15.195 EAL: Detected lcore 50 as core 14 on socket 1
00:06:15.195 EAL: Detected lcore 51 as core 15 on socket 1
00:06:15.195 EAL: Detected lcore 52 as core 16 on socket 1
00:06:15.195 EAL: Detected lcore 53 as core 17 on socket 1
00:06:15.195 EAL: Detected lcore 54 as core 18 on socket 1
00:06:15.195 EAL: Detected lcore 55 as core 19 on socket 1
00:06:15.195 EAL: Detected lcore 56 as core 20 on socket 1
00:06:15.195 EAL: Detected lcore 57 as core 21 on socket 1
00:06:15.195 EAL: Detected lcore 58 as core 22 on socket 1
00:06:15.195 EAL: Detected lcore 59 as core 23 on socket 1
00:06:15.195 EAL: Detected lcore 60 as core 24 on socket 1
00:06:15.195 EAL: Detected lcore 61 as core 25 on socket 1
00:06:15.195 EAL: Detected lcore 62 as core 26 on socket 1
00:06:15.195 EAL: Detected lcore 63 as core 27 on socket 1
00:06:15.195 EAL: Detected lcore 64 as core 28 on socket 1
00:06:15.195 EAL: Detected lcore 65 as core 29 on socket 1
00:06:15.195 EAL: Detected lcore 66 as core 30 on socket 1
00:06:15.195 EAL: Detected lcore 67 as core 31 on socket 1
00:06:15.195 EAL: Detected lcore 68 as core 32 on socket 1
00:06:15.195 EAL: Detected lcore 69 as core 33 on socket 1
00:06:15.195 EAL: Detected lcore 70 as core 34 on socket 1
00:06:15.195 EAL: Detected lcore 71 as core 35 on socket 1
00:06:15.195 EAL: Detected lcore 72 as core 0 on socket 0
00:06:15.195 EAL: Detected lcore 73 as core 1 on socket 0
00:06:15.195 EAL: Detected lcore 74 as core 2 on socket 0
00:06:15.195 EAL: Detected lcore 75 as core 3 on socket 0
00:06:15.195 EAL: Detected lcore 76 as core 4 on socket 0
00:06:15.195 EAL: Detected lcore 77 as core 5 on socket 0
00:06:15.195 EAL: Detected lcore 78 as core 6 on socket 0
00:06:15.195 EAL: Detected lcore 79 as core 7 on socket 0
00:06:15.195 EAL: Detected lcore 80 as core 8 on socket 0
00:06:15.195 EAL: Detected lcore 81 as core 9 on socket 0
00:06:15.195 EAL: Detected lcore 82 as core 10 on socket 0
00:06:15.195 EAL: Detected lcore 83 as core 11 on socket 0
00:06:15.195 EAL: Detected lcore 84 as core 12 on socket 0
00:06:15.195 EAL: Detected lcore 85 as core 13 on socket 0
00:06:15.195 EAL: Detected lcore 86 as core 14 on socket 0
00:06:15.195 EAL: Detected lcore 87 as core 15 on socket 0
00:06:15.195 EAL: Detected lcore 88 as core 16 on socket 0
00:06:15.195 EAL: Detected lcore 89 as core 17 on socket 0
00:06:15.195 EAL: Detected lcore 90 as core 18 on socket 0
00:06:15.195 EAL: Detected lcore 91 as core 19 on socket 0
00:06:15.195 EAL: Detected lcore 92 as core 20 on socket 0
00:06:15.195 EAL: Detected lcore 93 as core 21 on socket 0
00:06:15.195 EAL: Detected lcore 94 as core 22 on socket 0
00:06:15.195 EAL: Detected lcore 95 as core 23 on socket 0
00:06:15.195 EAL: Detected lcore 96 as core 24 on socket 0
00:06:15.195 EAL: Detected lcore 97 as core 25 on socket 0
00:06:15.195 EAL: Detected lcore 98 as core 26 on socket 0
00:06:15.195 EAL: Detected lcore 99 as core 27 on socket 0
00:06:15.195 EAL: Detected lcore 100 as core 28 on socket 0
00:06:15.195 EAL: Detected lcore 101 as core 29 on socket 0
00:06:15.195 EAL: Detected lcore 102 as core 30 on socket 0
00:06:15.195 EAL: Detected lcore 103 as core 31 on socket 0
00:06:15.195 EAL: Detected lcore 104 as core 32 on socket 0
00:06:15.195 EAL: Detected lcore 105 as core 33 on socket 0
00:06:15.195 EAL: Detected lcore 106 as core 34 on socket 0
00:06:15.195 EAL: Detected lcore 107 as core 35 on socket 0
00:06:15.195 EAL: Detected lcore 108 as core 0 on socket 1
00:06:15.195 EAL: Detected lcore 109 as core 1 on socket 1
00:06:15.195 EAL: Detected lcore 110 as core 2 on socket 1
00:06:15.195 EAL: Detected lcore 111 as core 3 on socket 1
00:06:15.195 EAL: Detected lcore 112 as core 4 on socket 1
00:06:15.195 EAL: Detected lcore 113 as core 5 on socket 1
00:06:15.195 EAL: Detected lcore 114 as core 6 on socket 1
00:06:15.195 EAL: Detected lcore 115 as core 7 on socket 1
00:06:15.195 EAL: Detected lcore 116 as core 8 on socket 1
00:06:15.195 EAL: Detected lcore 117 as core 9 on socket 1
00:06:15.195 EAL: Detected lcore 118 as core 10 on socket 1
00:06:15.195 EAL: Detected lcore 119 as core 11 on socket 1
00:06:15.195 EAL: Detected lcore 120 as core 12 on socket 1
00:06:15.195 EAL: Detected lcore 121 as core 13 on socket 1
00:06:15.195 EAL: Detected lcore 122 as core 14 on socket 1
00:06:15.195 EAL: Detected lcore 123 as core 15 on socket 1
00:06:15.195 EAL: Detected lcore 124 as core 16 on socket 1
00:06:15.195 EAL: Detected lcore 125 as core 17 on socket 1
00:06:15.195 EAL: Detected lcore 126 as core 18 on socket 1
00:06:15.195 EAL: Detected lcore 127 as core 19 on socket 1
00:06:15.195 EAL: Skipped lcore 128 as core 20 on socket 1
00:06:15.195 EAL: Skipped lcore 129 as core 21 on socket 1
00:06:15.195 EAL: Skipped lcore 130 as core 22 on socket 1
00:06:15.195 EAL: Skipped lcore 131 as core 23 on socket 1
00:06:15.195 EAL: Skipped lcore 132 as core 24 on socket 1
00:06:15.195 EAL: Skipped lcore 133 as core 25 on socket 1
00:06:15.195 EAL: Skipped lcore 134 as core 26 on socket 1
00:06:15.195 EAL: Skipped lcore 135 as core 27 on socket 1
00:06:15.195 EAL: Skipped lcore 136 as core 28 on socket 1
00:06:15.195 EAL: Skipped lcore 137 as core 29 on socket 1
00:06:15.195 EAL: Skipped lcore 138 as core 30 on socket 1
00:06:15.195 EAL: Skipped lcore 139 as core 31 on socket 1
00:06:15.196 EAL: Skipped lcore 140 as core 32 on socket 1
00:06:15.196 EAL: Skipped lcore 141 as core 33 on socket 1
00:06:15.196 EAL: Skipped lcore 142 as core 34 on socket 1
00:06:15.196 EAL: Skipped lcore 143 as core 35 on socket 1
00:06:15.196 EAL: Maximum logical cores by configuration: 128
00:06:15.196 EAL: Detected CPU lcores: 128
00:06:15.196 EAL: Detected NUMA nodes: 2
00:06:15.196 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:06:15.196 EAL: Detected shared linkage of DPDK
00:06:15.196 EAL: No shared files mode enabled, IPC will be disabled
00:06:15.196 EAL: Bus pci wants IOVA as 'DC'
00:06:15.196 EAL: Buses did not request a specific IOVA mode.
00:06:15.196 EAL: IOMMU is available, selecting IOVA as VA mode.
00:06:15.196 EAL: Selected IOVA mode 'VA'
00:06:15.196 EAL: Probing VFIO support...
00:06:15.196 EAL: IOMMU type 1 (Type 1) is supported
00:06:15.196 EAL: IOMMU type 7 (sPAPR) is not supported
00:06:15.196 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:06:15.196 EAL: VFIO support initialized
00:06:15.196 EAL: Ask a virtual area of 0x2e000 bytes
00:06:15.196 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:15.196 EAL: Setting up physically contiguous memory...
00:06:15.196 EAL: Setting maximum number of open files to 524288
00:06:15.196 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:15.196 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:06:15.196 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:15.196 EAL: Ask a virtual area of 0x61000 bytes
00:06:15.196 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:15.196 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:15.196 EAL: Ask a virtual area of 0x400000000 bytes
00:06:15.196 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:15.196 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:15.196 EAL: Ask a virtual area of 0x61000 bytes
00:06:15.196 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:15.196 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:15.196 EAL: Ask a virtual area of 0x400000000 bytes
00:06:15.196 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:15.196 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:15.196 EAL: Ask a virtual area of 0x61000 bytes
00:06:15.196 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:15.196 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:15.196 EAL: Ask a virtual area of 0x400000000 bytes
00:06:15.196 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:15.196 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:15.196 EAL: Ask a virtual area of 0x61000 bytes
00:06:15.196 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:15.196 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:15.196 EAL: Ask a virtual area of 0x400000000 bytes
00:06:15.196 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:15.196 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:15.196 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:06:15.196 EAL: Ask a virtual area of 0x61000 bytes
00:06:15.196 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:06:15.196 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:15.196 EAL: Ask a virtual area of 0x400000000 bytes
00:06:15.196 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:06:15.196 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:06:15.196 EAL: Ask a virtual area of 0x61000 bytes
00:06:15.196 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:06:15.196 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:15.196 EAL: Ask a virtual area of 0x400000000 bytes
00:06:15.196 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:06:15.196 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:06:15.196 EAL: Ask a virtual area of 0x61000 bytes
00:06:15.196 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:06:15.196 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:15.196 EAL: Ask a virtual area of 0x400000000 bytes
00:06:15.196 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:06:15.196 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:06:15.196 EAL: Ask a virtual area of 0x61000 bytes
00:06:15.196 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:06:15.196 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:15.196 EAL: Ask a virtual area of 0x400000000 bytes
00:06:15.196 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:06:15.196 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:06:15.196 EAL: Hugepages will be freed exactly as allocated.
00:06:15.196 EAL: No shared files mode enabled, IPC is disabled
00:06:15.196 EAL: No shared files mode enabled, IPC is disabled
00:06:15.196 EAL: TSC frequency is ~2400000 KHz
00:06:15.196 EAL: Main lcore 0 is ready (tid=7f90b6befa00;cpuset=[0])
00:06:15.196 EAL: Trying to obtain current memory policy.
00:06:15.196 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.196 EAL: Restoring previous memory policy: 0
00:06:15.196 EAL: request: mp_malloc_sync
00:06:15.196 EAL: No shared files mode enabled, IPC is disabled
00:06:15.196 EAL: Heap on socket 0 was expanded by 2MB
00:06:15.196 EAL: No shared files mode enabled, IPC is disabled
00:06:15.196 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:15.196 EAL: Mem event callback 'spdk:(nil)' registered
00:06:15.196
00:06:15.196
00:06:15.196 CUnit - A unit testing framework for C - Version 2.1-3
00:06:15.196 http://cunit.sourceforge.net/
00:06:15.196
00:06:15.196
00:06:15.196 Suite: components_suite
00:06:15.196 Test: vtophys_malloc_test ...passed
00:06:15.196 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:15.196 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.196 EAL: Restoring previous memory policy: 4
00:06:15.196 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.196 EAL: request: mp_malloc_sync
00:06:15.196 EAL: No shared files mode enabled, IPC is disabled
00:06:15.196 EAL: Heap on socket 0 was expanded by 4MB
00:06:15.196 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.196 EAL: request: mp_malloc_sync
00:06:15.196 EAL: No shared files mode enabled, IPC is disabled
00:06:15.196 EAL: Heap on socket 0 was shrunk by 4MB
00:06:15.196 EAL: Trying to obtain current memory policy.
00:06:15.196 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.196 EAL: Restoring previous memory policy: 4
00:06:15.196 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.196 EAL: request: mp_malloc_sync
00:06:15.196 EAL: No shared files mode enabled, IPC is disabled
00:06:15.196 EAL: Heap on socket 0 was expanded by 6MB
00:06:15.196 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.196 EAL: request: mp_malloc_sync
00:06:15.196 EAL: No shared files mode enabled, IPC is disabled
00:06:15.196 EAL: Heap on socket 0 was shrunk by 6MB
00:06:15.196 EAL: Trying to obtain current memory policy.
00:06:15.196 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.196 EAL: Restoring previous memory policy: 4
00:06:15.196 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.196 EAL: request: mp_malloc_sync
00:06:15.196 EAL: No shared files mode enabled, IPC is disabled
00:06:15.196 EAL: Heap on socket 0 was expanded by 10MB
00:06:15.196 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.196 EAL: request: mp_malloc_sync
00:06:15.196 EAL: No shared files mode enabled, IPC is disabled
00:06:15.197 EAL: Heap on socket 0 was shrunk by 10MB
00:06:15.197 EAL: Trying to obtain current memory policy.
00:06:15.197 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.197 EAL: Restoring previous memory policy: 4
00:06:15.197 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.197 EAL: request: mp_malloc_sync
00:06:15.197 EAL: No shared files mode enabled, IPC is disabled
00:06:15.197 EAL: Heap on socket 0 was expanded by 18MB
00:06:15.197 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.197 EAL: request: mp_malloc_sync
00:06:15.197 EAL: No shared files mode enabled, IPC is disabled
00:06:15.197 EAL: Heap on socket 0 was shrunk by 18MB
00:06:15.197 EAL: Trying to obtain current memory policy.
00:06:15.197 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.197 EAL: Restoring previous memory policy: 4
00:06:15.197 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.197 EAL: request: mp_malloc_sync
00:06:15.197 EAL: No shared files mode enabled, IPC is disabled
00:06:15.197 EAL: Heap on socket 0 was expanded by 34MB
00:06:15.197 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.197 EAL: request: mp_malloc_sync
00:06:15.197 EAL: No shared files mode enabled, IPC is disabled
00:06:15.197 EAL: Heap on socket 0 was shrunk by 34MB
00:06:15.197 EAL: Trying to obtain current memory policy.
00:06:15.197 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.197 EAL: Restoring previous memory policy: 4
00:06:15.197 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.197 EAL: request: mp_malloc_sync
00:06:15.197 EAL: No shared files mode enabled, IPC is disabled
00:06:15.197 EAL: Heap on socket 0 was expanded by 66MB
00:06:15.197 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.197 EAL: request: mp_malloc_sync
00:06:15.197 EAL: No shared files mode enabled, IPC is disabled
00:06:15.197 EAL: Heap on socket 0 was shrunk by 66MB
00:06:15.197 EAL: Trying to obtain current memory policy.
00:06:15.197 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.197 EAL: Restoring previous memory policy: 4
00:06:15.197 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.197 EAL: request: mp_malloc_sync
00:06:15.197 EAL: No shared files mode enabled, IPC is disabled
00:06:15.197 EAL: Heap on socket 0 was expanded by 130MB
00:06:15.197 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.197 EAL: request: mp_malloc_sync
00:06:15.197 EAL: No shared files mode enabled, IPC is disabled
00:06:15.197 EAL: Heap on socket 0 was shrunk by 130MB
00:06:15.197 EAL: Trying to obtain current memory policy.
00:06:15.197 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.458 EAL: Restoring previous memory policy: 4
00:06:15.458 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.458 EAL: request: mp_malloc_sync
00:06:15.458 EAL: No shared files mode enabled, IPC is disabled
00:06:15.458 EAL: Heap on socket 0 was expanded by 258MB
00:06:15.458 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.458 EAL: request: mp_malloc_sync
00:06:15.458 EAL: No shared files mode enabled, IPC is disabled
00:06:15.458 EAL: Heap on socket 0 was shrunk by 258MB
00:06:15.458 EAL: Trying to obtain current memory policy.
00:06:15.458 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.458 EAL: Restoring previous memory policy: 4
00:06:15.458 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.458 EAL: request: mp_malloc_sync
00:06:15.458 EAL: No shared files mode enabled, IPC is disabled
00:06:15.458 EAL: Heap on socket 0 was expanded by 514MB
00:06:15.458 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.458 EAL: request: mp_malloc_sync
00:06:15.458 EAL: No shared files mode enabled, IPC is disabled
00:06:15.458 EAL: Heap on socket 0 was shrunk by 514MB
00:06:15.458 EAL: Trying to obtain current memory policy.
00:06:15.719 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:15.719 EAL: Restoring previous memory policy: 4
00:06:15.719 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.719 EAL: request: mp_malloc_sync
00:06:15.719 EAL: No shared files mode enabled, IPC is disabled
00:06:15.719 EAL: Heap on socket 0 was expanded by 1026MB
00:06:15.719 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.980 EAL: request: mp_malloc_sync
00:06:15.980 EAL: No shared files mode enabled, IPC is disabled
00:06:15.980 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:15.980 passed
00:06:15.980
00:06:15.980 Run Summary: Type Total Ran Passed Failed Inactive
00:06:15.980 suites 1 1 n/a 0 0
00:06:15.980 tests 2 2 2 0 0
00:06:15.980 asserts 497 497 497 0 n/a
00:06:15.980
00:06:15.980 Elapsed time = 0.687 seconds
00:06:15.980 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.980 EAL: request: mp_malloc_sync
00:06:15.980 EAL: No shared files mode enabled, IPC is disabled
00:06:15.980 EAL: Heap on socket 0 was shrunk by 2MB
00:06:15.980 EAL: No shared files mode enabled, IPC is disabled
00:06:15.980 EAL: No shared files mode enabled, IPC is disabled
00:06:15.980 EAL: No shared files mode enabled, IPC is disabled
00:06:15.980
00:06:15.980 real 0m0.841s
00:06:15.980 user 0m0.452s
00:06:15.980 sys 0m0.359s
00:06:15.980 07:15:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:15.980 07:15:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:15.980 ************************************
00:06:15.980 END TEST env_vtophys
00:06:15.980 ************************************
00:06:15.980 07:15:43 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:15.980 07:15:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:15.980 07:15:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:15.980 07:15:43 env -- common/autotest_common.sh@10 -- # set +x
00:06:15.980 ************************************
00:06:15.980 START TEST env_pci
00:06:15.980 ************************************
00:06:15.980 07:15:43 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:15.980
00:06:15.980
00:06:15.980 CUnit - A unit testing framework for C - Version 2.1-3
00:06:15.980 http://cunit.sourceforge.net/
00:06:15.980
00:06:15.980
00:06:15.980 Suite: pci
00:06:15.980 Test: pci_hook ...[2024-11-26 07:15:43.994818] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1216554 has claimed it
00:06:15.980 EAL: Cannot find device (10000:00:01.0)
00:06:15.980 EAL: Failed to attach device on primary process
00:06:15.980 passed
00:06:15.980
00:06:15.980 Run Summary: Type Total Ran Passed Failed Inactive
00:06:15.980 suites 1 1 n/a 0 0 00:06:15.980 tests 1 1 1 0 0 00:06:15.980 asserts 25 25 25 0 n/a 00:06:15.980 00:06:15.980 Elapsed time = 0.032 seconds 00:06:15.980 00:06:15.980 real 0m0.054s 00:06:15.980 user 0m0.018s 00:06:15.980 sys 0m0.036s 00:06:15.981 07:15:44 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.981 07:15:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:15.981 ************************************ 00:06:15.981 END TEST env_pci 00:06:15.981 ************************************ 00:06:15.981 07:15:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:15.981 07:15:44 env -- env/env.sh@15 -- # uname 00:06:16.242 07:15:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:16.242 07:15:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:16.242 07:15:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:16.242 07:15:44 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:16.242 07:15:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.242 07:15:44 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.242 ************************************ 00:06:16.242 START TEST env_dpdk_post_init 00:06:16.242 ************************************ 00:06:16.242 07:15:44 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:16.242 EAL: Detected CPU lcores: 128 00:06:16.242 EAL: Detected NUMA nodes: 2 00:06:16.242 EAL: Detected shared linkage of DPDK 00:06:16.242 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:16.242 EAL: Selected IOVA mode 'VA' 00:06:16.242 EAL: VFIO support initialized 00:06:16.242 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:16.242 EAL: Using IOMMU type 1 (Type 1) 00:06:16.503 EAL: Ignore mapping IO port bar(1) 00:06:16.503 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:16.764 EAL: Ignore mapping IO port bar(1) 00:06:16.764 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:16.764 EAL: Ignore mapping IO port bar(1) 00:06:17.025 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:17.025 EAL: Ignore mapping IO port bar(1) 00:06:17.285 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:17.285 EAL: Ignore mapping IO port bar(1) 00:06:17.548 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:17.548 EAL: Ignore mapping IO port bar(1) 00:06:17.548 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:17.810 EAL: Ignore mapping IO port bar(1) 00:06:17.810 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:18.071 EAL: Ignore mapping IO port bar(1) 00:06:18.071 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:18.333 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:18.333 EAL: Ignore mapping IO port bar(1) 00:06:18.593 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:18.593 EAL: Ignore mapping IO port bar(1) 00:06:18.855 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:18.855 EAL: Ignore mapping IO port bar(1) 00:06:19.116 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:19.116 EAL: Ignore mapping IO port bar(1) 00:06:19.116 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:19.377 EAL: Ignore mapping IO port bar(1) 00:06:19.377 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:19.637 EAL: Ignore mapping IO port bar(1) 00:06:19.637 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:19.898 EAL: Ignore mapping IO port bar(1) 00:06:19.898 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:19.898 EAL: Ignore mapping IO port bar(1) 00:06:20.159 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:20.159 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:20.159 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:20.160 Starting DPDK initialization... 00:06:20.160 Starting SPDK post initialization... 00:06:20.160 SPDK NVMe probe 00:06:20.160 Attaching to 0000:65:00.0 00:06:20.160 Attached to 0000:65:00.0 00:06:20.160 Cleaning up... 00:06:22.074 00:06:22.074 real 0m5.754s 00:06:22.074 user 0m0.115s 00:06:22.074 sys 0m0.191s 00:06:22.074 07:15:49 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.074 07:15:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.074 ************************************ 00:06:22.074 END TEST env_dpdk_post_init 00:06:22.074 ************************************ 00:06:22.074 07:15:49 env -- env/env.sh@26 -- # uname 00:06:22.074 07:15:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:22.074 07:15:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:22.074 07:15:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.074 07:15:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.074 07:15:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.074 ************************************ 00:06:22.074 START TEST env_mem_callbacks 00:06:22.074 ************************************ 00:06:22.074 07:15:49 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:22.074 EAL: Detected CPU lcores: 128 00:06:22.074 EAL: Detected NUMA nodes: 2 00:06:22.074 EAL: Detected shared linkage of DPDK 00:06:22.074 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:22.074 EAL: Selected IOVA mode 'VA' 00:06:22.074 EAL: VFIO support initialized 00:06:22.074 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:22.074 00:06:22.074 00:06:22.074 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.074 http://cunit.sourceforge.net/ 00:06:22.074 00:06:22.074 00:06:22.074 Suite: memory 00:06:22.074 Test: test ... 
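The register/unregister trace that follows is the standalone CUnit binary driving SPDK's memory-map notification hooks through malloc/free cycles. It can be rerun outside the harness; a sketch, assuming hugepages are already configured as in this run:

    # rerun the memory-callback unit test directly (binary path from the harness above);
    # each "register/unregister ... PASSED" pair in the output is one malloc/free
    # cycle observed through the registered callbacks
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks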
00:06:22.074 register 0x200000200000 2097152 00:06:22.074 malloc 3145728 00:06:22.074 register 0x200000400000 4194304 00:06:22.074 buf 0x200000500000 len 3145728 PASSED 00:06:22.074 malloc 64 00:06:22.074 buf 0x2000004fff40 len 64 PASSED 00:06:22.074 malloc 4194304 00:06:22.074 register 0x200000800000 6291456 00:06:22.074 buf 0x200000a00000 len 4194304 PASSED 00:06:22.074 free 0x200000500000 3145728 00:06:22.074 free 0x2000004fff40 64 00:06:22.074 unregister 0x200000400000 4194304 PASSED 00:06:22.074 free 0x200000a00000 4194304 00:06:22.074 unregister 0x200000800000 6291456 PASSED 00:06:22.074 malloc 8388608 00:06:22.074 register 0x200000400000 10485760 00:06:22.074 buf 0x200000600000 len 8388608 PASSED 00:06:22.074 free 0x200000600000 8388608 00:06:22.074 unregister 0x200000400000 10485760 PASSED 00:06:22.074 passed 00:06:22.074 00:06:22.074 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.074 suites 1 1 n/a 0 0 00:06:22.074 tests 1 1 1 0 0 00:06:22.074 asserts 15 15 15 0 n/a 00:06:22.075 00:06:22.075 Elapsed time = 0.010 seconds 00:06:22.075 00:06:22.075 real 0m0.071s 00:06:22.075 user 0m0.017s 00:06:22.075 sys 0m0.055s 00:06:22.075 07:15:50 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.075 07:15:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:22.075 ************************************ 00:06:22.075 END TEST env_mem_callbacks 00:06:22.075 ************************************ 00:06:22.075 00:06:22.075 real 0m7.540s 00:06:22.075 user 0m1.058s 00:06:22.075 sys 0m1.038s 00:06:22.075 07:15:50 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.075 07:15:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.075 ************************************ 00:06:22.075 END TEST env 00:06:22.075 ************************************ 00:06:22.075 07:15:50 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:22.075 07:15:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.075 07:15:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.075 07:15:50 -- common/autotest_common.sh@10 -- # set +x 00:06:22.075 ************************************ 00:06:22.075 START TEST rpc 00:06:22.075 ************************************ 00:06:22.075 07:15:50 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:22.336 * Looking for test storage... 
00:06:22.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.336 07:15:50 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.336 07:15:50 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.336 07:15:50 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.336 07:15:50 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.336 07:15:50 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.336 07:15:50 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.336 07:15:50 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.336 07:15:50 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.336 07:15:50 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.336 07:15:50 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.336 07:15:50 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.336 07:15:50 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:22.336 07:15:50 rpc -- scripts/common.sh@345 -- # : 1 00:06:22.336 07:15:50 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.336 07:15:50 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.336 07:15:50 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:22.336 07:15:50 rpc -- scripts/common.sh@353 -- # local d=1 00:06:22.336 07:15:50 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.336 07:15:50 rpc -- scripts/common.sh@355 -- # echo 1 00:06:22.336 07:15:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.336 07:15:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:22.336 07:15:50 rpc -- scripts/common.sh@353 -- # local d=2 00:06:22.336 07:15:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.336 07:15:50 rpc -- scripts/common.sh@355 -- # echo 2 00:06:22.336 07:15:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.336 07:15:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.336 07:15:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.336 07:15:50 rpc -- scripts/common.sh@368 -- # return 0 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.336 --rc genhtml_branch_coverage=1 00:06:22.336 --rc genhtml_function_coverage=1 00:06:22.336 --rc genhtml_legend=1 00:06:22.336 --rc geninfo_all_blocks=1 00:06:22.336 --rc geninfo_unexecuted_blocks=1 00:06:22.336 00:06:22.336 ' 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.336 --rc genhtml_branch_coverage=1 00:06:22.336 --rc genhtml_function_coverage=1 00:06:22.336 --rc genhtml_legend=1 00:06:22.336 --rc geninfo_all_blocks=1 00:06:22.336 --rc geninfo_unexecuted_blocks=1 00:06:22.336 00:06:22.336 ' 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.336 --rc genhtml_branch_coverage=1 00:06:22.336 --rc genhtml_function_coverage=1 
00:06:22.336 --rc genhtml_legend=1 00:06:22.336 --rc geninfo_all_blocks=1 00:06:22.336 --rc geninfo_unexecuted_blocks=1 00:06:22.336 00:06:22.336 ' 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.336 --rc genhtml_branch_coverage=1 00:06:22.336 --rc genhtml_function_coverage=1 00:06:22.336 --rc genhtml_legend=1 00:06:22.336 --rc geninfo_all_blocks=1 00:06:22.336 --rc geninfo_unexecuted_blocks=1 00:06:22.336 00:06:22.336 ' 00:06:22.336 07:15:50 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:22.336 07:15:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1217876 00:06:22.336 07:15:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.336 07:15:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1217876 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@835 -- # '[' -z 1217876 ']' 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.336 07:15:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.336 [2024-11-26 07:15:50.401844] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:06:22.336 [2024-11-26 07:15:50.401914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217876 ] 00:06:22.598 [2024-11-26 07:15:50.494859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.598 [2024-11-26 07:15:50.548241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:22.598 [2024-11-26 07:15:50.548289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1217876' to capture a snapshot of events at runtime. 00:06:22.598 [2024-11-26 07:15:50.548299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.598 [2024-11-26 07:15:50.548309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.598 [2024-11-26 07:15:50.548316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1217876 for offline analysis/debug. 
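Once the reactor below reports started, the target is listening on /var/tmp/spdk.sock, and the rpc_integrity sequence that follows can be replayed by hand through scripts/rpc.py. A sketch using the same RPC and bdev names that appear in the run (Malloc0/Passthru0 are the names this run happens to produce):

    # the sequence rpc_integrity drives via rpc_cmd, issued manually
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 8 512             # prints the new bdev name (Malloc0 here)
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs                       # both bdevs listed, Malloc0 now claimed
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete Passthru0
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_delete Malloc0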
00:06:22.598 [2024-11-26 07:15:50.549056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.171 07:15:51 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.171 07:15:51 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.171 07:15:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:23.171 07:15:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:23.171 07:15:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:23.171 07:15:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:23.171 07:15:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.171 07:15:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.171 07:15:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.171 ************************************ 00:06:23.171 START TEST rpc_integrity 00:06:23.171 ************************************ 00:06:23.171 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:23.171 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:23.171 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.171 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.432 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.432 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:23.432 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:23.432 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:23.432 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:23.432 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.432 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.432 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.432 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:23.432 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:23.432 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.432 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.432 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.432 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:23.432 { 00:06:23.432 "name": "Malloc0", 00:06:23.432 "aliases": [ 00:06:23.432 "ee3dd411-d0c8-49cb-8ce2-4c5451946448" 00:06:23.432 ], 00:06:23.432 "product_name": "Malloc disk", 00:06:23.432 "block_size": 512, 00:06:23.432 "num_blocks": 16384, 00:06:23.432 "uuid": "ee3dd411-d0c8-49cb-8ce2-4c5451946448", 00:06:23.432 "assigned_rate_limits": { 00:06:23.432 "rw_ios_per_sec": 0, 00:06:23.432 "rw_mbytes_per_sec": 0, 00:06:23.432 "r_mbytes_per_sec": 0, 00:06:23.432 "w_mbytes_per_sec": 0 00:06:23.432 }, 
00:06:23.432 "claimed": false, 00:06:23.432 "zoned": false, 00:06:23.432 "supported_io_types": { 00:06:23.432 "read": true, 00:06:23.432 "write": true, 00:06:23.432 "unmap": true, 00:06:23.432 "flush": true, 00:06:23.432 "reset": true, 00:06:23.432 "nvme_admin": false, 00:06:23.432 "nvme_io": false, 00:06:23.432 "nvme_io_md": false, 00:06:23.432 "write_zeroes": true, 00:06:23.432 "zcopy": true, 00:06:23.432 "get_zone_info": false, 00:06:23.432 "zone_management": false, 00:06:23.432 "zone_append": false, 00:06:23.432 "compare": false, 00:06:23.432 "compare_and_write": false, 00:06:23.432 "abort": true, 00:06:23.432 "seek_hole": false, 00:06:23.432 "seek_data": false, 00:06:23.432 "copy": true, 00:06:23.432 "nvme_iov_md": false 00:06:23.432 }, 00:06:23.432 "memory_domains": [ 00:06:23.432 { 00:06:23.432 "dma_device_id": "system", 00:06:23.432 "dma_device_type": 1 00:06:23.432 }, 00:06:23.432 { 00:06:23.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.433 "dma_device_type": 2 00:06:23.433 } 00:06:23.433 ], 00:06:23.433 "driver_specific": {} 00:06:23.433 } 00:06:23.433 ]' 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.433 [2024-11-26 07:15:51.399441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:23.433 [2024-11-26 07:15:51.399485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:23.433 [2024-11-26 07:15:51.399502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11cfdb0 00:06:23.433 [2024-11-26 07:15:51.399509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:23.433 [2024-11-26 07:15:51.401067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:23.433 [2024-11-26 07:15:51.401103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:23.433 Passthru0 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:23.433 { 00:06:23.433 "name": "Malloc0", 00:06:23.433 "aliases": [ 00:06:23.433 "ee3dd411-d0c8-49cb-8ce2-4c5451946448" 00:06:23.433 ], 00:06:23.433 "product_name": "Malloc disk", 00:06:23.433 "block_size": 512, 00:06:23.433 "num_blocks": 16384, 00:06:23.433 "uuid": "ee3dd411-d0c8-49cb-8ce2-4c5451946448", 00:06:23.433 "assigned_rate_limits": { 00:06:23.433 "rw_ios_per_sec": 0, 00:06:23.433 "rw_mbytes_per_sec": 0, 00:06:23.433 "r_mbytes_per_sec": 0, 00:06:23.433 "w_mbytes_per_sec": 0 00:06:23.433 }, 00:06:23.433 "claimed": true, 00:06:23.433 "claim_type": "exclusive_write", 00:06:23.433 "zoned": false, 00:06:23.433 "supported_io_types": { 00:06:23.433 "read": true, 00:06:23.433 "write": true, 00:06:23.433 "unmap": true, 00:06:23.433 "flush": 
true, 00:06:23.433 "reset": true, 00:06:23.433 "nvme_admin": false, 00:06:23.433 "nvme_io": false, 00:06:23.433 "nvme_io_md": false, 00:06:23.433 "write_zeroes": true, 00:06:23.433 "zcopy": true, 00:06:23.433 "get_zone_info": false, 00:06:23.433 "zone_management": false, 00:06:23.433 "zone_append": false, 00:06:23.433 "compare": false, 00:06:23.433 "compare_and_write": false, 00:06:23.433 "abort": true, 00:06:23.433 "seek_hole": false, 00:06:23.433 "seek_data": false, 00:06:23.433 "copy": true, 00:06:23.433 "nvme_iov_md": false 00:06:23.433 }, 00:06:23.433 "memory_domains": [ 00:06:23.433 { 00:06:23.433 "dma_device_id": "system", 00:06:23.433 "dma_device_type": 1 00:06:23.433 }, 00:06:23.433 { 00:06:23.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.433 "dma_device_type": 2 00:06:23.433 } 00:06:23.433 ], 00:06:23.433 "driver_specific": {} 00:06:23.433 }, 00:06:23.433 { 00:06:23.433 "name": "Passthru0", 00:06:23.433 "aliases": [ 00:06:23.433 "70060fe9-a7e2-55d3-b225-bc544323b847" 00:06:23.433 ], 00:06:23.433 "product_name": "passthru", 00:06:23.433 "block_size": 512, 00:06:23.433 "num_blocks": 16384, 00:06:23.433 "uuid": "70060fe9-a7e2-55d3-b225-bc544323b847", 00:06:23.433 "assigned_rate_limits": { 00:06:23.433 "rw_ios_per_sec": 0, 00:06:23.433 "rw_mbytes_per_sec": 0, 00:06:23.433 "r_mbytes_per_sec": 0, 00:06:23.433 "w_mbytes_per_sec": 0 00:06:23.433 }, 00:06:23.433 "claimed": false, 00:06:23.433 "zoned": false, 00:06:23.433 "supported_io_types": { 00:06:23.433 "read": true, 00:06:23.433 "write": true, 00:06:23.433 "unmap": true, 00:06:23.433 "flush": true, 00:06:23.433 "reset": true, 00:06:23.433 "nvme_admin": false, 00:06:23.433 "nvme_io": false, 00:06:23.433 "nvme_io_md": false, 00:06:23.433 "write_zeroes": true, 00:06:23.433 "zcopy": true, 00:06:23.433 "get_zone_info": false, 00:06:23.433 "zone_management": false, 00:06:23.433 "zone_append": false, 00:06:23.433 "compare": false, 00:06:23.433 "compare_and_write": false, 00:06:23.433 "abort": true, 00:06:23.433 "seek_hole": false, 00:06:23.433 "seek_data": false, 00:06:23.433 "copy": true, 00:06:23.433 "nvme_iov_md": false 00:06:23.433 }, 00:06:23.433 "memory_domains": [ 00:06:23.433 { 00:06:23.433 "dma_device_id": "system", 00:06:23.433 "dma_device_type": 1 00:06:23.433 }, 00:06:23.433 { 00:06:23.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.433 "dma_device_type": 2 00:06:23.433 } 00:06:23.433 ], 00:06:23.433 "driver_specific": { 00:06:23.433 "passthru": { 00:06:23.433 "name": "Passthru0", 00:06:23.433 "base_bdev_name": "Malloc0" 00:06:23.433 } 00:06:23.433 } 00:06:23.433 } 00:06:23.433 ]' 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.433 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:23.433 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:23.694 07:15:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:23.694 00:06:23.694 real 0m0.300s 00:06:23.694 user 0m0.178s 00:06:23.694 sys 0m0.051s 00:06:23.694 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.694 07:15:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.694 ************************************ 00:06:23.694 END TEST rpc_integrity 00:06:23.694 ************************************ 00:06:23.694 07:15:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:23.694 07:15:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.694 07:15:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.694 07:15:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.694 ************************************ 00:06:23.694 START TEST rpc_plugins 00:06:23.695 ************************************ 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:23.695 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.695 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:23.695 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.695 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:23.695 { 00:06:23.695 "name": "Malloc1", 00:06:23.695 "aliases": [ 00:06:23.695 "25acabf3-edf7-4e51-943d-ef54b27d2928" 00:06:23.695 ], 00:06:23.695 "product_name": "Malloc disk", 00:06:23.695 "block_size": 4096, 00:06:23.695 "num_blocks": 256, 00:06:23.695 "uuid": "25acabf3-edf7-4e51-943d-ef54b27d2928", 00:06:23.695 "assigned_rate_limits": { 00:06:23.695 "rw_ios_per_sec": 0, 00:06:23.695 "rw_mbytes_per_sec": 0, 00:06:23.695 "r_mbytes_per_sec": 0, 00:06:23.695 "w_mbytes_per_sec": 0 00:06:23.695 }, 00:06:23.695 "claimed": false, 00:06:23.695 "zoned": false, 00:06:23.695 "supported_io_types": { 00:06:23.695 "read": true, 00:06:23.695 "write": true, 00:06:23.695 "unmap": true, 00:06:23.695 "flush": true, 00:06:23.695 "reset": true, 00:06:23.695 "nvme_admin": false, 00:06:23.695 "nvme_io": false, 00:06:23.695 "nvme_io_md": false, 00:06:23.695 "write_zeroes": true, 00:06:23.695 "zcopy": true, 00:06:23.695 "get_zone_info": false, 00:06:23.695 "zone_management": false, 00:06:23.695 "zone_append": false, 00:06:23.695 "compare": false, 00:06:23.695 "compare_and_write": false, 00:06:23.695 "abort": true, 00:06:23.695 "seek_hole": false, 00:06:23.695 "seek_data": false, 00:06:23.695 "copy": true, 00:06:23.695 "nvme_iov_md": false 
00:06:23.695 }, 00:06:23.695 "memory_domains": [ 00:06:23.695 { 00:06:23.695 "dma_device_id": "system", 00:06:23.695 "dma_device_type": 1 00:06:23.695 }, 00:06:23.695 { 00:06:23.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.695 "dma_device_type": 2 00:06:23.695 } 00:06:23.695 ], 00:06:23.695 "driver_specific": {} 00:06:23.695 } 00:06:23.695 ]' 00:06:23.695 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:23.695 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:23.695 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.695 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.695 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.695 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:23.695 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:23.957 07:15:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:23.957 00:06:23.957 real 0m0.152s 00:06:23.957 user 0m0.095s 00:06:23.957 sys 0m0.022s 00:06:23.957 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.957 07:15:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:23.957 ************************************ 00:06:23.957 END TEST rpc_plugins 00:06:23.957 ************************************ 00:06:23.957 07:15:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:23.957 07:15:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.957 07:15:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.957 07:15:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.957 ************************************ 00:06:23.957 START TEST rpc_trace_cmd_test 00:06:23.957 ************************************ 00:06:23.957 07:15:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:23.957 07:15:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:23.957 07:15:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:23.957 07:15:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.957 07:15:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.957 07:15:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.957 07:15:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:23.957 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1217876", 00:06:23.957 "tpoint_group_mask": "0x8", 00:06:23.957 "iscsi_conn": { 00:06:23.957 "mask": "0x2", 00:06:23.957 "tpoint_mask": "0x0" 00:06:23.957 }, 00:06:23.957 "scsi": { 00:06:23.957 "mask": "0x4", 00:06:23.957 "tpoint_mask": "0x0" 00:06:23.957 }, 00:06:23.957 "bdev": { 00:06:23.957 "mask": "0x8", 00:06:23.957 "tpoint_mask": "0xffffffffffffffff" 00:06:23.957 }, 00:06:23.957 "nvmf_rdma": { 00:06:23.957 "mask": "0x10", 00:06:23.957 "tpoint_mask": "0x0" 00:06:23.957 }, 00:06:23.957 "nvmf_tcp": { 00:06:23.957 "mask": "0x20", 00:06:23.958 
"tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "ftl": { 00:06:23.958 "mask": "0x40", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "blobfs": { 00:06:23.958 "mask": "0x80", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "dsa": { 00:06:23.958 "mask": "0x200", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "thread": { 00:06:23.958 "mask": "0x400", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "nvme_pcie": { 00:06:23.958 "mask": "0x800", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "iaa": { 00:06:23.958 "mask": "0x1000", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "nvme_tcp": { 00:06:23.958 "mask": "0x2000", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "bdev_nvme": { 00:06:23.958 "mask": "0x4000", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "sock": { 00:06:23.958 "mask": "0x8000", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "blob": { 00:06:23.958 "mask": "0x10000", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "bdev_raid": { 00:06:23.958 "mask": "0x20000", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 }, 00:06:23.958 "scheduler": { 00:06:23.958 "mask": "0x40000", 00:06:23.958 "tpoint_mask": "0x0" 00:06:23.958 } 00:06:23.958 }' 00:06:23.958 07:15:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:23.958 07:15:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:23.958 07:15:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:23.958 07:15:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:23.958 07:15:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:23.958 07:15:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:23.958 07:15:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:24.220 07:15:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:24.220 07:15:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:24.220 07:15:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:24.220 00:06:24.220 real 0m0.232s 00:06:24.220 user 0m0.195s 00:06:24.220 sys 0m0.031s 00:06:24.220 07:15:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.220 07:15:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.220 ************************************ 00:06:24.220 END TEST rpc_trace_cmd_test 00:06:24.220 ************************************ 00:06:24.220 07:15:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:24.220 07:15:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:24.220 07:15:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:24.220 07:15:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.220 07:15:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.220 07:15:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.220 ************************************ 00:06:24.220 START TEST rpc_daemon_integrity 00:06:24.220 ************************************ 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.220 07:15:52 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:24.220 { 00:06:24.220 "name": "Malloc2", 00:06:24.220 "aliases": [ 00:06:24.220 "18b93dca-27a7-4d35-9504-33684a82a167" 00:06:24.220 ], 00:06:24.220 "product_name": "Malloc disk", 00:06:24.220 "block_size": 512, 00:06:24.220 "num_blocks": 16384, 00:06:24.220 "uuid": "18b93dca-27a7-4d35-9504-33684a82a167", 00:06:24.220 "assigned_rate_limits": { 00:06:24.220 "rw_ios_per_sec": 0, 00:06:24.220 "rw_mbytes_per_sec": 0, 00:06:24.220 "r_mbytes_per_sec": 0, 00:06:24.220 "w_mbytes_per_sec": 0 00:06:24.220 }, 00:06:24.220 "claimed": false, 00:06:24.220 "zoned": false, 00:06:24.220 "supported_io_types": { 00:06:24.220 "read": true, 00:06:24.220 "write": true, 00:06:24.220 "unmap": true, 00:06:24.220 "flush": true, 00:06:24.220 "reset": true, 00:06:24.220 "nvme_admin": false, 00:06:24.220 "nvme_io": false, 00:06:24.220 "nvme_io_md": false, 00:06:24.220 "write_zeroes": true, 00:06:24.220 "zcopy": true, 00:06:24.220 "get_zone_info": false, 00:06:24.220 "zone_management": false, 00:06:24.220 "zone_append": false, 00:06:24.220 "compare": false, 00:06:24.220 "compare_and_write": false, 00:06:24.220 "abort": true, 00:06:24.220 "seek_hole": false, 00:06:24.220 "seek_data": false, 00:06:24.220 "copy": true, 00:06:24.220 "nvme_iov_md": false 00:06:24.220 }, 00:06:24.220 "memory_domains": [ 00:06:24.220 { 00:06:24.220 "dma_device_id": "system", 00:06:24.220 "dma_device_type": 1 00:06:24.220 }, 00:06:24.220 { 00:06:24.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.220 "dma_device_type": 2 00:06:24.220 } 00:06:24.220 ], 00:06:24.220 "driver_specific": {} 00:06:24.220 } 00:06:24.220 ]' 00:06:24.220 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 [2024-11-26 07:15:52.329981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:24.482 
[2024-11-26 07:15:52.330035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:24.482 [2024-11-26 07:15:52.330051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13008d0 00:06:24.482 [2024-11-26 07:15:52.330064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:24.482 [2024-11-26 07:15:52.331527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:24.482 [2024-11-26 07:15:52.331563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:24.482 Passthru0 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:24.482 { 00:06:24.482 "name": "Malloc2", 00:06:24.482 "aliases": [ 00:06:24.482 "18b93dca-27a7-4d35-9504-33684a82a167" 00:06:24.482 ], 00:06:24.482 "product_name": "Malloc disk", 00:06:24.482 "block_size": 512, 00:06:24.482 "num_blocks": 16384, 00:06:24.482 "uuid": "18b93dca-27a7-4d35-9504-33684a82a167", 00:06:24.482 "assigned_rate_limits": { 00:06:24.482 "rw_ios_per_sec": 0, 00:06:24.482 "rw_mbytes_per_sec": 0, 00:06:24.482 "r_mbytes_per_sec": 0, 00:06:24.482 "w_mbytes_per_sec": 0 00:06:24.482 }, 00:06:24.482 "claimed": true, 00:06:24.482 "claim_type": "exclusive_write", 00:06:24.482 "zoned": false, 00:06:24.482 "supported_io_types": { 00:06:24.482 "read": true, 00:06:24.482 "write": true, 00:06:24.482 "unmap": true, 00:06:24.482 "flush": true, 00:06:24.482 "reset": true, 00:06:24.482 "nvme_admin": false, 00:06:24.482 "nvme_io": false, 00:06:24.482 "nvme_io_md": false, 00:06:24.482 "write_zeroes": true, 00:06:24.482 "zcopy": true, 00:06:24.482 "get_zone_info": false, 00:06:24.482 "zone_management": false, 00:06:24.482 "zone_append": false, 00:06:24.482 "compare": false, 00:06:24.482 "compare_and_write": false, 00:06:24.482 "abort": true, 00:06:24.482 "seek_hole": false, 00:06:24.482 "seek_data": false, 00:06:24.482 "copy": true, 00:06:24.482 "nvme_iov_md": false 00:06:24.482 }, 00:06:24.482 "memory_domains": [ 00:06:24.482 { 00:06:24.482 "dma_device_id": "system", 00:06:24.482 "dma_device_type": 1 00:06:24.482 }, 00:06:24.482 { 00:06:24.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.482 "dma_device_type": 2 00:06:24.482 } 00:06:24.482 ], 00:06:24.482 "driver_specific": {} 00:06:24.482 }, 00:06:24.482 { 00:06:24.482 "name": "Passthru0", 00:06:24.482 "aliases": [ 00:06:24.482 "16882872-fcef-5ac5-bce1-0c06454c66c1" 00:06:24.482 ], 00:06:24.482 "product_name": "passthru", 00:06:24.482 "block_size": 512, 00:06:24.482 "num_blocks": 16384, 00:06:24.482 "uuid": "16882872-fcef-5ac5-bce1-0c06454c66c1", 00:06:24.482 "assigned_rate_limits": { 00:06:24.482 "rw_ios_per_sec": 0, 00:06:24.482 "rw_mbytes_per_sec": 0, 00:06:24.482 "r_mbytes_per_sec": 0, 00:06:24.482 "w_mbytes_per_sec": 0 00:06:24.482 }, 00:06:24.482 "claimed": false, 00:06:24.482 "zoned": false, 00:06:24.482 "supported_io_types": { 00:06:24.482 "read": true, 00:06:24.482 "write": true, 00:06:24.482 "unmap": true, 00:06:24.482 "flush": true, 00:06:24.482 "reset": true, 
00:06:24.482 "nvme_admin": false, 00:06:24.482 "nvme_io": false, 00:06:24.482 "nvme_io_md": false, 00:06:24.482 "write_zeroes": true, 00:06:24.482 "zcopy": true, 00:06:24.482 "get_zone_info": false, 00:06:24.482 "zone_management": false, 00:06:24.482 "zone_append": false, 00:06:24.482 "compare": false, 00:06:24.482 "compare_and_write": false, 00:06:24.482 "abort": true, 00:06:24.482 "seek_hole": false, 00:06:24.482 "seek_data": false, 00:06:24.482 "copy": true, 00:06:24.482 "nvme_iov_md": false 00:06:24.482 }, 00:06:24.482 "memory_domains": [ 00:06:24.482 { 00:06:24.482 "dma_device_id": "system", 00:06:24.482 "dma_device_type": 1 00:06:24.482 }, 00:06:24.482 { 00:06:24.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:24.482 "dma_device_type": 2 00:06:24.482 } 00:06:24.482 ], 00:06:24.482 "driver_specific": { 00:06:24.482 "passthru": { 00:06:24.482 "name": "Passthru0", 00:06:24.482 "base_bdev_name": "Malloc2" 00:06:24.482 } 00:06:24.482 } 00:06:24.482 } 00:06:24.482 ]' 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:24.482 00:06:24.482 real 0m0.302s 00:06:24.482 user 0m0.180s 00:06:24.482 sys 0m0.051s 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.482 07:15:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:24.482 ************************************ 00:06:24.482 END TEST rpc_daemon_integrity 00:06:24.482 ************************************ 00:06:24.482 07:15:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:24.482 07:15:52 rpc -- rpc/rpc.sh@84 -- # killprocess 1217876 00:06:24.482 07:15:52 rpc -- common/autotest_common.sh@954 -- # '[' -z 1217876 ']' 00:06:24.482 07:15:52 rpc -- common/autotest_common.sh@958 -- # kill -0 1217876 00:06:24.482 07:15:52 rpc -- common/autotest_common.sh@959 -- # uname 00:06:24.482 07:15:52 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.482 07:15:52 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217876 
00:06:24.744 07:15:52 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.744 07:15:52 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.744 07:15:52 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217876' 00:06:24.744 killing process with pid 1217876 00:06:24.744 07:15:52 rpc -- common/autotest_common.sh@973 -- # kill 1217876 00:06:24.744 07:15:52 rpc -- common/autotest_common.sh@978 -- # wait 1217876 00:06:24.744 00:06:24.744 real 0m2.685s 00:06:24.744 user 0m3.393s 00:06:24.744 sys 0m0.844s 00:06:25.005 07:15:52 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.005 07:15:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.005 ************************************ 00:06:25.005 END TEST rpc 00:06:25.005 ************************************ 00:06:25.005 07:15:52 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:25.005 07:15:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.005 07:15:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.005 07:15:52 -- common/autotest_common.sh@10 -- # set +x 00:06:25.005 ************************************ 00:06:25.005 START TEST skip_rpc 00:06:25.005 ************************************ 00:06:25.005 07:15:52 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:25.005 * Looking for test storage... 00:06:25.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:25.005 07:15:53 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.005 07:15:53 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.005 07:15:53 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.005 07:15:53 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.266 07:15:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.267 07:15:53 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:25.267 07:15:53 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.267 07:15:53 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.267 --rc genhtml_branch_coverage=1 00:06:25.267 --rc genhtml_function_coverage=1 00:06:25.267 --rc genhtml_legend=1 00:06:25.267 --rc geninfo_all_blocks=1 00:06:25.267 --rc geninfo_unexecuted_blocks=1 00:06:25.267 00:06:25.267 ' 00:06:25.267 07:15:53 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.267 --rc genhtml_branch_coverage=1 00:06:25.267 --rc genhtml_function_coverage=1 00:06:25.267 --rc genhtml_legend=1 00:06:25.267 --rc geninfo_all_blocks=1 00:06:25.267 --rc geninfo_unexecuted_blocks=1 00:06:25.267 00:06:25.267 ' 00:06:25.267 07:15:53 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.267 --rc genhtml_branch_coverage=1 00:06:25.267 --rc genhtml_function_coverage=1 00:06:25.267 --rc genhtml_legend=1 00:06:25.267 --rc geninfo_all_blocks=1 00:06:25.267 --rc geninfo_unexecuted_blocks=1 00:06:25.267 00:06:25.267 ' 00:06:25.267 07:15:53 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.267 --rc genhtml_branch_coverage=1 00:06:25.267 --rc genhtml_function_coverage=1 00:06:25.267 --rc genhtml_legend=1 00:06:25.267 --rc geninfo_all_blocks=1 00:06:25.267 --rc geninfo_unexecuted_blocks=1 00:06:25.267 00:06:25.267 ' 00:06:25.267 07:15:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:25.267 07:15:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:25.267 07:15:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:25.267 07:15:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.267 07:15:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.267 07:15:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.267 ************************************ 00:06:25.267 START TEST skip_rpc 00:06:25.267 ************************************ 00:06:25.267 07:15:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:25.267 
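The body that follows starts the target with --no-rpc-server and asserts that an RPC call then fails. The manual equivalent, a sketch using the binary path and flags shown in the xtrace below:

    # start the target without an RPC listener, then confirm an RPC call fails
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5                                   # the test also sleeps before probing
    scripts/rpc.py spdk_get_version || echo 'RPC failed, as skip_rpc expects'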
07:15:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1218730 00:06:25.267 07:15:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:25.267 07:15:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:25.267 07:15:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:25.267 [2024-11-26 07:15:53.214980] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:06:25.267 [2024-11-26 07:15:53.215039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218730 ] 00:06:25.267 [2024-11-26 07:15:53.309553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.527 [2024-11-26 07:15:53.361713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1218730 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1218730 ']' 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1218730 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1218730 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1218730' 00:06:30.817 killing process with pid 1218730 00:06:30.817 07:15:58 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1218730 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1218730 00:06:30.817 00:06:30.817 real 0m5.266s 00:06:30.817 user 0m5.025s 00:06:30.817 sys 0m0.291s 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.817 07:15:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.817 ************************************ 00:06:30.817 END TEST skip_rpc 00:06:30.817 ************************************ 00:06:30.817 07:15:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:30.817 07:15:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.817 07:15:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.817 07:15:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.817 ************************************ 00:06:30.817 START TEST skip_rpc_with_json 00:06:30.817 ************************************ 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1219766 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1219766 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1219766 ']' 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.817 [2024-11-26 07:15:58.553644] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:06:30.817 [2024-11-26 07:15:58.553692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219766 ] 00:06:30.817 [2024-11-26 07:15:58.613229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.817 [2024-11-26 07:15:58.643205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.817 [2024-11-26 07:15:58.822163] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:30.817 request: 00:06:30.817 { 00:06:30.817 "trtype": "tcp", 00:06:30.817 "method": "nvmf_get_transports", 00:06:30.817 "req_id": 1 00:06:30.817 } 00:06:30.817 Got JSON-RPC error response 00:06:30.817 response: 00:06:30.817 { 00:06:30.817 "code": -19, 00:06:30.817 "message": "No such device" 00:06:30.817 } 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:30.817 07:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:30.818 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.818 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.818 [2024-11-26 07:15:58.834259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.818 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.818 07:15:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:30.818 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.818 07:15:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:31.079 { 00:06:31.079 "subsystems": [ 00:06:31.079 { 00:06:31.079 "subsystem": "fsdev", 00:06:31.079 "config": [ 00:06:31.079 { 00:06:31.079 "method": "fsdev_set_opts", 00:06:31.079 "params": { 00:06:31.079 "fsdev_io_pool_size": 65535, 00:06:31.079 "fsdev_io_cache_size": 256 00:06:31.079 } 00:06:31.079 } 00:06:31.079 ] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "vfio_user_target", 00:06:31.079 "config": null 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "keyring", 00:06:31.079 "config": [] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "iobuf", 00:06:31.079 "config": [ 00:06:31.079 { 00:06:31.079 "method": "iobuf_set_options", 00:06:31.079 "params": { 00:06:31.079 "small_pool_count": 8192, 00:06:31.079 "large_pool_count": 1024, 00:06:31.079 "small_bufsize": 8192, 00:06:31.079 "large_bufsize": 135168, 00:06:31.079 "enable_numa": false 00:06:31.079 } 00:06:31.079 } 
00:06:31.079 ] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "sock", 00:06:31.079 "config": [ 00:06:31.079 { 00:06:31.079 "method": "sock_set_default_impl", 00:06:31.079 "params": { 00:06:31.079 "impl_name": "posix" 00:06:31.079 } 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "method": "sock_impl_set_options", 00:06:31.079 "params": { 00:06:31.079 "impl_name": "ssl", 00:06:31.079 "recv_buf_size": 4096, 00:06:31.079 "send_buf_size": 4096, 00:06:31.079 "enable_recv_pipe": true, 00:06:31.079 "enable_quickack": false, 00:06:31.079 "enable_placement_id": 0, 00:06:31.079 "enable_zerocopy_send_server": true, 00:06:31.079 "enable_zerocopy_send_client": false, 00:06:31.079 "zerocopy_threshold": 0, 00:06:31.079 "tls_version": 0, 00:06:31.079 "enable_ktls": false 00:06:31.079 } 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "method": "sock_impl_set_options", 00:06:31.079 "params": { 00:06:31.079 "impl_name": "posix", 00:06:31.079 "recv_buf_size": 2097152, 00:06:31.079 "send_buf_size": 2097152, 00:06:31.079 "enable_recv_pipe": true, 00:06:31.079 "enable_quickack": false, 00:06:31.079 "enable_placement_id": 0, 00:06:31.079 "enable_zerocopy_send_server": true, 00:06:31.079 "enable_zerocopy_send_client": false, 00:06:31.079 "zerocopy_threshold": 0, 00:06:31.079 "tls_version": 0, 00:06:31.079 "enable_ktls": false 00:06:31.079 } 00:06:31.079 } 00:06:31.079 ] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "vmd", 00:06:31.079 "config": [] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "accel", 00:06:31.079 "config": [ 00:06:31.079 { 00:06:31.079 "method": "accel_set_options", 00:06:31.079 "params": { 00:06:31.079 "small_cache_size": 128, 00:06:31.079 "large_cache_size": 16, 00:06:31.079 "task_count": 2048, 00:06:31.079 "sequence_count": 2048, 00:06:31.079 "buf_count": 2048 00:06:31.079 } 00:06:31.079 } 00:06:31.079 ] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "bdev", 00:06:31.079 "config": [ 00:06:31.079 { 00:06:31.079 "method": "bdev_set_options", 00:06:31.079 "params": { 00:06:31.079 "bdev_io_pool_size": 65535, 00:06:31.079 "bdev_io_cache_size": 256, 00:06:31.079 "bdev_auto_examine": true, 00:06:31.079 "iobuf_small_cache_size": 128, 00:06:31.079 "iobuf_large_cache_size": 16 00:06:31.079 } 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "method": "bdev_raid_set_options", 00:06:31.079 "params": { 00:06:31.079 "process_window_size_kb": 1024, 00:06:31.079 "process_max_bandwidth_mb_sec": 0 00:06:31.079 } 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "method": "bdev_iscsi_set_options", 00:06:31.079 "params": { 00:06:31.079 "timeout_sec": 30 00:06:31.079 } 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "method": "bdev_nvme_set_options", 00:06:31.079 "params": { 00:06:31.079 "action_on_timeout": "none", 00:06:31.079 "timeout_us": 0, 00:06:31.079 "timeout_admin_us": 0, 00:06:31.079 "keep_alive_timeout_ms": 10000, 00:06:31.079 "arbitration_burst": 0, 00:06:31.079 "low_priority_weight": 0, 00:06:31.079 "medium_priority_weight": 0, 00:06:31.079 "high_priority_weight": 0, 00:06:31.079 "nvme_adminq_poll_period_us": 10000, 00:06:31.079 "nvme_ioq_poll_period_us": 0, 00:06:31.079 "io_queue_requests": 0, 00:06:31.079 "delay_cmd_submit": true, 00:06:31.079 "transport_retry_count": 4, 00:06:31.079 "bdev_retry_count": 3, 00:06:31.079 "transport_ack_timeout": 0, 00:06:31.079 "ctrlr_loss_timeout_sec": 0, 00:06:31.079 "reconnect_delay_sec": 0, 00:06:31.079 "fast_io_fail_timeout_sec": 0, 00:06:31.079 "disable_auto_failback": false, 00:06:31.079 "generate_uuids": false, 00:06:31.079 "transport_tos": 
0, 00:06:31.079 "nvme_error_stat": false, 00:06:31.079 "rdma_srq_size": 0, 00:06:31.079 "io_path_stat": false, 00:06:31.079 "allow_accel_sequence": false, 00:06:31.079 "rdma_max_cq_size": 0, 00:06:31.079 "rdma_cm_event_timeout_ms": 0, 00:06:31.079 "dhchap_digests": [ 00:06:31.079 "sha256", 00:06:31.079 "sha384", 00:06:31.079 "sha512" 00:06:31.079 ], 00:06:31.079 "dhchap_dhgroups": [ 00:06:31.079 "null", 00:06:31.079 "ffdhe2048", 00:06:31.079 "ffdhe3072", 00:06:31.079 "ffdhe4096", 00:06:31.079 "ffdhe6144", 00:06:31.079 "ffdhe8192" 00:06:31.079 ] 00:06:31.079 } 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "method": "bdev_nvme_set_hotplug", 00:06:31.079 "params": { 00:06:31.079 "period_us": 100000, 00:06:31.079 "enable": false 00:06:31.079 } 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "method": "bdev_wait_for_examine" 00:06:31.079 } 00:06:31.079 ] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "scsi", 00:06:31.079 "config": null 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "scheduler", 00:06:31.079 "config": [ 00:06:31.079 { 00:06:31.079 "method": "framework_set_scheduler", 00:06:31.079 "params": { 00:06:31.079 "name": "static" 00:06:31.079 } 00:06:31.079 } 00:06:31.079 ] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "vhost_scsi", 00:06:31.079 "config": [] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "vhost_blk", 00:06:31.079 "config": [] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "ublk", 00:06:31.079 "config": [] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "nbd", 00:06:31.079 "config": [] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "nvmf", 00:06:31.079 "config": [ 00:06:31.079 { 00:06:31.079 "method": "nvmf_set_config", 00:06:31.079 "params": { 00:06:31.079 "discovery_filter": "match_any", 00:06:31.079 "admin_cmd_passthru": { 00:06:31.079 "identify_ctrlr": false 00:06:31.079 }, 00:06:31.079 "dhchap_digests": [ 00:06:31.079 "sha256", 00:06:31.079 "sha384", 00:06:31.079 "sha512" 00:06:31.079 ], 00:06:31.079 "dhchap_dhgroups": [ 00:06:31.079 "null", 00:06:31.079 "ffdhe2048", 00:06:31.079 "ffdhe3072", 00:06:31.079 "ffdhe4096", 00:06:31.079 "ffdhe6144", 00:06:31.079 "ffdhe8192" 00:06:31.079 ] 00:06:31.079 } 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "method": "nvmf_set_max_subsystems", 00:06:31.079 "params": { 00:06:31.079 "max_subsystems": 1024 00:06:31.079 } 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "method": "nvmf_set_crdt", 00:06:31.079 "params": { 00:06:31.079 "crdt1": 0, 00:06:31.079 "crdt2": 0, 00:06:31.079 "crdt3": 0 00:06:31.079 } 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "method": "nvmf_create_transport", 00:06:31.079 "params": { 00:06:31.079 "trtype": "TCP", 00:06:31.079 "max_queue_depth": 128, 00:06:31.079 "max_io_qpairs_per_ctrlr": 127, 00:06:31.079 "in_capsule_data_size": 4096, 00:06:31.079 "max_io_size": 131072, 00:06:31.079 "io_unit_size": 131072, 00:06:31.079 "max_aq_depth": 128, 00:06:31.079 "num_shared_buffers": 511, 00:06:31.079 "buf_cache_size": 4294967295, 00:06:31.079 "dif_insert_or_strip": false, 00:06:31.079 "zcopy": false, 00:06:31.079 "c2h_success": true, 00:06:31.079 "sock_priority": 0, 00:06:31.079 "abort_timeout_sec": 1, 00:06:31.079 "ack_timeout": 0, 00:06:31.079 "data_wr_pool_size": 0 00:06:31.079 } 00:06:31.079 } 00:06:31.079 ] 00:06:31.079 }, 00:06:31.079 { 00:06:31.079 "subsystem": "iscsi", 00:06:31.079 "config": [ 00:06:31.079 { 00:06:31.079 "method": "iscsi_set_options", 00:06:31.079 "params": { 00:06:31.079 "node_base": "iqn.2016-06.io.spdk", 00:06:31.079 "max_sessions": 
128, 00:06:31.079 "max_connections_per_session": 2, 00:06:31.079 "max_queue_depth": 64, 00:06:31.079 "default_time2wait": 2, 00:06:31.079 "default_time2retain": 20, 00:06:31.079 "first_burst_length": 8192, 00:06:31.079 "immediate_data": true, 00:06:31.079 "allow_duplicated_isid": false, 00:06:31.079 "error_recovery_level": 0, 00:06:31.079 "nop_timeout": 60, 00:06:31.079 "nop_in_interval": 30, 00:06:31.079 "disable_chap": false, 00:06:31.079 "require_chap": false, 00:06:31.079 "mutual_chap": false, 00:06:31.079 "chap_group": 0, 00:06:31.079 "max_large_datain_per_connection": 64, 00:06:31.079 "max_r2t_per_connection": 4, 00:06:31.079 "pdu_pool_size": 36864, 00:06:31.079 "immediate_data_pool_size": 16384, 00:06:31.079 "data_out_pool_size": 2048 00:06:31.079 } 00:06:31.079 } 00:06:31.079 ] 00:06:31.079 } 00:06:31.079 ] 00:06:31.079 } 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1219766 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1219766 ']' 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1219766 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1219766 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1219766' 00:06:31.079 killing process with pid 1219766 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1219766 00:06:31.079 07:15:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1219766 00:06:31.340 07:15:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1219878 00:06:31.340 07:15:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:31.340 07:15:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1219878 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1219878 ']' 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1219878 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1219878 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1219878' 00:06:36.629 killing process with pid 1219878 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1219878 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1219878 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:36.629 00:06:36.629 real 0m6.026s 00:06:36.629 user 0m5.878s 00:06:36.629 sys 0m0.508s 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.629 ************************************ 00:06:36.629 END TEST skip_rpc_with_json 00:06:36.629 ************************************ 00:06:36.629 07:16:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:36.629 07:16:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.629 07:16:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.629 07:16:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.629 ************************************ 00:06:36.629 START TEST skip_rpc_with_delay 00:06:36.629 ************************************ 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:36.629 
[2024-11-26 07:16:04.667409] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.629 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.629 00:06:36.629 real 0m0.082s 00:06:36.629 user 0m0.046s 00:06:36.629 sys 0m0.035s 00:06:36.630 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.630 07:16:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:36.630 ************************************ 00:06:36.630 END TEST skip_rpc_with_delay 00:06:36.630 ************************************ 00:06:36.907 07:16:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:36.907 07:16:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:36.907 07:16:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:36.907 07:16:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.907 07:16:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.907 07:16:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.907 ************************************ 00:06:36.907 START TEST exit_on_failed_rpc_init 00:06:36.907 ************************************ 00:06:36.907 07:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:36.907 07:16:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1221162 00:06:36.907 07:16:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1221162 00:06:36.907 07:16:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.907 07:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1221162 ']' 00:06:36.907 07:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.907 07:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.907 07:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.907 07:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.907 07:16:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:36.907 [2024-11-26 07:16:04.832169] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:06:36.907 [2024-11-26 07:16:04.832229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221162 ] 00:06:36.907 [2024-11-26 07:16:04.918799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.907 [2024-11-26 07:16:04.953446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.581 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:37.582 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:37.843 [2024-11-26 07:16:05.679478] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:06:37.843 [2024-11-26 07:16:05.679532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221186 ] 00:06:37.843 [2024-11-26 07:16:05.765750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.843 [2024-11-26 07:16:05.801642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.843 [2024-11-26 07:16:05.801689] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:37.843 [2024-11-26 07:16:05.801699] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:37.843 [2024-11-26 07:16:05.801707] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1221162 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1221162 ']' 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1221162 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1221162 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1221162' 00:06:37.843 killing process with pid 1221162 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1221162 00:06:37.843 07:16:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1221162 00:06:38.105 00:06:38.105 real 0m1.317s 00:06:38.105 user 0m1.551s 00:06:38.105 sys 0m0.375s 00:06:38.105 07:16:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.105 07:16:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:38.105 ************************************ 00:06:38.105 END TEST exit_on_failed_rpc_init 00:06:38.105 ************************************ 00:06:38.105 07:16:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:38.105 00:06:38.105 real 0m13.217s 00:06:38.105 user 0m12.734s 00:06:38.105 sys 0m1.534s 00:06:38.105 07:16:06 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.105 07:16:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.105 ************************************ 00:06:38.105 END TEST skip_rpc 00:06:38.105 ************************************ 00:06:38.105 07:16:06 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:38.105 07:16:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.105 07:16:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.105 07:16:06 -- 
common/autotest_common.sh@10 -- # set +x 00:06:38.366 ************************************ 00:06:38.366 START TEST rpc_client 00:06:38.366 ************************************ 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:38.366 * Looking for test storage... 00:06:38.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.366 07:16:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.366 --rc genhtml_branch_coverage=1 00:06:38.366 --rc genhtml_function_coverage=1 00:06:38.366 --rc genhtml_legend=1 00:06:38.366 --rc geninfo_all_blocks=1 00:06:38.366 --rc geninfo_unexecuted_blocks=1 00:06:38.366 00:06:38.366 ' 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.366 --rc genhtml_branch_coverage=1 00:06:38.366 --rc genhtml_function_coverage=1 00:06:38.366 --rc genhtml_legend=1 00:06:38.366 --rc geninfo_all_blocks=1 00:06:38.366 --rc geninfo_unexecuted_blocks=1 00:06:38.366 00:06:38.366 ' 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.366 --rc genhtml_branch_coverage=1 00:06:38.366 --rc genhtml_function_coverage=1 00:06:38.366 --rc genhtml_legend=1 00:06:38.366 --rc geninfo_all_blocks=1 00:06:38.366 --rc geninfo_unexecuted_blocks=1 00:06:38.366 00:06:38.366 ' 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.366 --rc genhtml_branch_coverage=1 00:06:38.366 --rc genhtml_function_coverage=1 00:06:38.366 --rc genhtml_legend=1 00:06:38.366 --rc geninfo_all_blocks=1 00:06:38.366 --rc geninfo_unexecuted_blocks=1 00:06:38.366 00:06:38.366 ' 00:06:38.366 07:16:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:38.366 OK 00:06:38.366 07:16:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:38.366 00:06:38.366 real 0m0.231s 00:06:38.366 user 0m0.136s 00:06:38.366 sys 0m0.109s 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.366 07:16:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:38.366 ************************************ 00:06:38.366 END TEST rpc_client 00:06:38.366 ************************************ 00:06:38.629 07:16:06 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:06:38.629 07:16:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.629 07:16:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.629 07:16:06 -- common/autotest_common.sh@10 -- # set +x 00:06:38.629 ************************************ 00:06:38.629 START TEST json_config 00:06:38.629 ************************************ 00:06:38.629 07:16:06 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:38.629 07:16:06 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.629 07:16:06 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.629 07:16:06 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.629 07:16:06 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.629 07:16:06 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.629 07:16:06 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.629 07:16:06 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.629 07:16:06 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.629 07:16:06 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.629 07:16:06 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.629 07:16:06 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.629 07:16:06 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.629 07:16:06 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.629 07:16:06 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.629 07:16:06 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.629 07:16:06 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:38.629 07:16:06 json_config -- scripts/common.sh@345 -- # : 1 00:06:38.629 07:16:06 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.629 07:16:06 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.629 07:16:06 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:38.629 07:16:06 json_config -- scripts/common.sh@353 -- # local d=1 00:06:38.629 07:16:06 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.629 07:16:06 json_config -- scripts/common.sh@355 -- # echo 1 00:06:38.629 07:16:06 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.629 07:16:06 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:38.629 07:16:06 json_config -- scripts/common.sh@353 -- # local d=2 00:06:38.629 07:16:06 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.629 07:16:06 json_config -- scripts/common.sh@355 -- # echo 2 00:06:38.629 07:16:06 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.629 07:16:06 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.629 07:16:06 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.629 07:16:06 json_config -- scripts/common.sh@368 -- # return 0 00:06:38.629 07:16:06 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.629 07:16:06 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.629 --rc genhtml_branch_coverage=1 00:06:38.629 --rc genhtml_function_coverage=1 00:06:38.629 --rc genhtml_legend=1 00:06:38.629 --rc geninfo_all_blocks=1 00:06:38.629 --rc geninfo_unexecuted_blocks=1 00:06:38.629 00:06:38.629 ' 00:06:38.629 07:16:06 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.629 --rc genhtml_branch_coverage=1 00:06:38.629 --rc genhtml_function_coverage=1 00:06:38.629 --rc genhtml_legend=1 00:06:38.629 --rc geninfo_all_blocks=1 00:06:38.629 --rc geninfo_unexecuted_blocks=1 00:06:38.629 00:06:38.629 ' 00:06:38.629 07:16:06 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.629 --rc genhtml_branch_coverage=1 00:06:38.629 --rc genhtml_function_coverage=1 00:06:38.629 --rc genhtml_legend=1 00:06:38.629 --rc geninfo_all_blocks=1 00:06:38.629 --rc geninfo_unexecuted_blocks=1 00:06:38.629 00:06:38.629 ' 00:06:38.629 07:16:06 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.629 --rc genhtml_branch_coverage=1 00:06:38.629 --rc genhtml_function_coverage=1 00:06:38.629 --rc genhtml_legend=1 00:06:38.629 --rc geninfo_all_blocks=1 00:06:38.629 --rc geninfo_unexecuted_blocks=1 00:06:38.629 00:06:38.629 ' 00:06:38.629 07:16:06 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:38.629 07:16:06 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.629 07:16:06 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.629 07:16:06 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.629 07:16:06 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.629 07:16:06 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.629 07:16:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.629 07:16:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.629 07:16:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.629 07:16:06 json_config -- paths/export.sh@5 -- # export PATH 00:06:38.629 07:16:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@51 -- # : 0 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:06:38.629 07:16:06 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.629 07:16:06 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.630 07:16:06 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.630 07:16:06 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:38.630 INFO: JSON configuration test init 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:38.630 07:16:06 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:38.630 07:16:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.630 07:16:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.891 07:16:06 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:38.891 07:16:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.891 07:16:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.891 07:16:06 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:38.891 07:16:06 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:38.891 07:16:06 json_config -- json_config/common.sh@10 -- # shift 00:06:38.891 07:16:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:38.891 07:16:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:38.891 07:16:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:38.891 07:16:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.891 07:16:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:38.891 07:16:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1221642 00:06:38.891 07:16:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:38.891 Waiting for target to run... 00:06:38.891 07:16:06 json_config -- json_config/common.sh@25 -- # waitforlisten 1221642 /var/tmp/spdk_tgt.sock 00:06:38.891 07:16:06 json_config -- common/autotest_common.sh@835 -- # '[' -z 1221642 ']' 00:06:38.891 07:16:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:38.891 07:16:06 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:38.891 07:16:06 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.891 07:16:06 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:38.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:38.891 07:16:06 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.891 07:16:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.891 [2024-11-26 07:16:06.793693] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:06:38.891 [2024-11-26 07:16:06.793764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221642 ] 00:06:39.152 [2024-11-26 07:16:07.082958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.152 [2024-11-26 07:16:07.111614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.723 07:16:07 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.723 07:16:07 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:39.723 07:16:07 json_config -- json_config/common.sh@26 -- # echo '' 00:06:39.723 00:06:39.723 07:16:07 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:39.723 07:16:07 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:39.723 07:16:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.723 07:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.723 07:16:07 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:39.723 07:16:07 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:39.723 07:16:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.723 07:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.724 07:16:07 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:39.724 07:16:07 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:39.724 07:16:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:40.296 07:16:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.296 07:16:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:40.296 07:16:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:40.296 07:16:08 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@54 -- # sort 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:40.296 07:16:08 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:40.296 07:16:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.296 07:16:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:40.557 07:16:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.557 07:16:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:40.557 07:16:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:40.557 MallocForNvmf0 00:06:40.557 07:16:08 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:40.557 07:16:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:40.817 MallocForNvmf1 00:06:40.817 07:16:08 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:40.817 07:16:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:41.079 [2024-11-26 07:16:08.933330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.079 07:16:08 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:41.079 07:16:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:41.079 07:16:09 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:41.079 07:16:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:41.347 07:16:09 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:41.347 07:16:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:41.607 07:16:09 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:41.608 07:16:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:41.608 [2024-11-26 07:16:09.651501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:41.608 07:16:09 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:41.608 07:16:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.608 07:16:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.869 07:16:09 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:41.869 07:16:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.869 07:16:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.869 07:16:09 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:41.869 07:16:09 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:41.869 07:16:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:41.869 MallocBdevForConfigChangeCheck 00:06:41.869 07:16:09 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:41.869 07:16:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.869 07:16:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:42.130 07:16:09 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:42.130 07:16:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.390 07:16:10 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:42.390 INFO: shutting down applications... 
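For reference, the target setup that json_config.sh traced above reduces to the RPC sequence below, run with SPDK's rpc.py against the same /var/tmp/spdk_tgt.sock (workspace paths shortened). This is a recap of commands already visible in the trace, not extra test steps:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The two malloc bdevs become namespaces of cnode1, and the listener on 127.0.0.1:4420 is what produces the 'NVMe/TCP Target Listening' notice above.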
00:06:42.390 07:16:10 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:42.390 07:16:10 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:42.390 07:16:10 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:42.390 07:16:10 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:42.651 Calling clear_iscsi_subsystem 00:06:42.651 Calling clear_nvmf_subsystem 00:06:42.651 Calling clear_nbd_subsystem 00:06:42.651 Calling clear_ublk_subsystem 00:06:42.651 Calling clear_vhost_blk_subsystem 00:06:42.651 Calling clear_vhost_scsi_subsystem 00:06:42.651 Calling clear_bdev_subsystem 00:06:42.651 07:16:10 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:42.651 07:16:10 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:42.651 07:16:10 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:42.651 07:16:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.651 07:16:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:42.651 07:16:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:43.224 07:16:11 json_config -- json_config/json_config.sh@352 -- # break 00:06:43.224 07:16:11 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:43.224 07:16:11 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:43.224 07:16:11 json_config -- json_config/common.sh@31 -- # local app=target 00:06:43.224 07:16:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:43.224 07:16:11 json_config -- json_config/common.sh@35 -- # [[ -n 1221642 ]] 00:06:43.224 07:16:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1221642 00:06:43.224 07:16:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:43.224 07:16:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:43.224 07:16:11 json_config -- json_config/common.sh@41 -- # kill -0 1221642 00:06:43.224 07:16:11 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:43.486 07:16:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:43.486 07:16:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:43.486 07:16:11 json_config -- json_config/common.sh@41 -- # kill -0 1221642 00:06:43.486 07:16:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:43.486 07:16:11 json_config -- json_config/common.sh@43 -- # break 00:06:43.486 07:16:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:43.486 07:16:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:43.486 SPDK target shutdown done 00:06:43.486 07:16:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:43.486 INFO: relaunching applications... 
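The 'SPDK target shutdown done' message above comes out of json_config/common.sh's shutdown helper. In sketch form, assuming only what the trace itself shows (a SIGINT, a 30-iteration loop, 0.5 s sleeps, and kill -0 as a liveness probe), the logic is roughly:

    kill -SIGINT "$pid"                          # ask spdk_tgt to exit cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break      # kill -0 sends no signal, it only checks the pid
        sleep 0.5
    done

A failing kill -0 is what takes the loop into the 'break' branch seen in the trace, after which the helper clears app_pid and reports the shutdown as done.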
00:06:43.486 07:16:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:43.486 07:16:11 json_config -- json_config/common.sh@9 -- # local app=target 00:06:43.486 07:16:11 json_config -- json_config/common.sh@10 -- # shift 00:06:43.486 07:16:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:43.486 07:16:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:43.486 07:16:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:43.486 07:16:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.747 07:16:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.747 07:16:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1222782 00:06:43.747 07:16:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:43.747 Waiting for target to run... 00:06:43.747 07:16:11 json_config -- json_config/common.sh@25 -- # waitforlisten 1222782 /var/tmp/spdk_tgt.sock 00:06:43.747 07:16:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:43.747 07:16:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 1222782 ']' 00:06:43.747 07:16:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:43.747 07:16:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.747 07:16:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:43.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:43.747 07:16:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.747 07:16:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.747 [2024-11-26 07:16:11.637850] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:06:43.747 [2024-11-26 07:16:11.637909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222782 ] 00:06:44.007 [2024-11-26 07:16:11.972429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.007 [2024-11-26 07:16:11.997461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.578 [2024-11-26 07:16:12.499400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.578 [2024-11-26 07:16:12.531757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:44.578 07:16:12 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.578 07:16:12 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:44.578 07:16:12 json_config -- json_config/common.sh@26 -- # echo '' 00:06:44.578 00:06:44.578 07:16:12 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:44.578 07:16:12 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:44.578 INFO: Checking if target configuration is the same... 
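The 'Checking if target configuration is the same' pass that follows works by normalizing both JSON documents before diffing them, so key ordering cannot cause false mismatches. A minimal sketch of json_diff.sh's approach, assuming config_filter.py reads stdin as its use of /dev/fd/62 below suggests (temp-file names illustrative):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.json       # running config
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'

An empty diff exits 0 (configs match); after the deliberate bdev_malloc_delete of MallocBdevForConfigChangeCheck further down, the same comparison exits 1 and the change is reported as detected.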
00:06:44.578 07:16:12 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:44.578 07:16:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.578 07:16:12 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.578 + '[' 2 -ne 2 ']' 00:06:44.578 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:44.578 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:44.578 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:44.578 +++ basename /dev/fd/62 00:06:44.578 ++ mktemp /tmp/62.XXX 00:06:44.578 + tmp_file_1=/tmp/62.Qn6 00:06:44.578 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:44.578 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:44.578 + tmp_file_2=/tmp/spdk_tgt_config.json.Q9d 00:06:44.578 + ret=0 00:06:44.578 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:44.838 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:45.099 + diff -u /tmp/62.Qn6 /tmp/spdk_tgt_config.json.Q9d 00:06:45.099 + echo 'INFO: JSON config files are the same' 00:06:45.099 INFO: JSON config files are the same 00:06:45.100 + rm /tmp/62.Qn6 /tmp/spdk_tgt_config.json.Q9d 00:06:45.100 + exit 0 00:06:45.100 07:16:12 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:45.100 07:16:12 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:45.100 INFO: changing configuration and checking if this can be detected... 00:06:45.100 07:16:12 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:45.100 07:16:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:45.100 07:16:13 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:45.100 07:16:13 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:45.100 07:16:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:45.100 + '[' 2 -ne 2 ']' 00:06:45.100 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:45.100 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:45.100 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:45.100 +++ basename /dev/fd/62 00:06:45.100 ++ mktemp /tmp/62.XXX 00:06:45.100 + tmp_file_1=/tmp/62.qKS 00:06:45.100 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:45.100 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:45.100 + tmp_file_2=/tmp/spdk_tgt_config.json.bbq 00:06:45.100 + ret=0 00:06:45.100 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:45.670 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:45.670 + diff -u /tmp/62.qKS /tmp/spdk_tgt_config.json.bbq 00:06:45.670 + ret=1 00:06:45.670 + echo '=== Start of file: /tmp/62.qKS ===' 00:06:45.670 + cat /tmp/62.qKS 00:06:45.670 + echo '=== End of file: /tmp/62.qKS ===' 00:06:45.670 + echo '' 00:06:45.670 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bbq ===' 00:06:45.670 + cat /tmp/spdk_tgt_config.json.bbq 00:06:45.670 + echo '=== End of file: /tmp/spdk_tgt_config.json.bbq ===' 00:06:45.670 + echo '' 00:06:45.670 + rm /tmp/62.qKS /tmp/spdk_tgt_config.json.bbq 00:06:45.670 + exit 1 00:06:45.670 07:16:13 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:45.671 INFO: configuration change detected. 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@324 -- # [[ -n 1222782 ]] 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.671 07:16:13 json_config -- json_config/json_config.sh@330 -- # killprocess 1222782 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@954 -- # '[' -z 1222782 ']' 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@958 -- # kill -0 1222782 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@959 -- # uname 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.671 07:16:13 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1222782 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1222782' 00:06:45.671 killing process with pid 1222782 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@973 -- # kill 1222782 00:06:45.671 07:16:13 json_config -- common/autotest_common.sh@978 -- # wait 1222782 00:06:45.932 07:16:13 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:45.932 07:16:13 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:45.932 07:16:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.932 07:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.932 07:16:13 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:45.932 07:16:13 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:45.932 INFO: Success 00:06:45.932 00:06:45.932 real 0m7.440s 00:06:45.932 user 0m9.059s 00:06:45.932 sys 0m1.947s 00:06:45.932 07:16:13 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.932 07:16:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.932 ************************************ 00:06:45.932 END TEST json_config 00:06:45.932 ************************************ 00:06:45.932 07:16:13 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:45.932 07:16:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.932 07:16:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.932 07:16:13 -- common/autotest_common.sh@10 -- # set +x 00:06:46.193 ************************************ 00:06:46.193 START TEST json_config_extra_key 00:06:46.193 ************************************ 00:06:46.193 07:16:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:46.193 07:16:14 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.193 07:16:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.193 07:16:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.193 07:16:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.193 07:16:14 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.193 07:16:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:46.193 07:16:14 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.193 07:16:14 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.194 --rc genhtml_branch_coverage=1 00:06:46.194 --rc genhtml_function_coverage=1 00:06:46.194 --rc genhtml_legend=1 00:06:46.194 --rc geninfo_all_blocks=1 00:06:46.194 --rc geninfo_unexecuted_blocks=1 00:06:46.194 00:06:46.194 ' 00:06:46.194 07:16:14 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.194 --rc genhtml_branch_coverage=1 00:06:46.194 --rc genhtml_function_coverage=1 00:06:46.194 --rc genhtml_legend=1 00:06:46.194 --rc geninfo_all_blocks=1 00:06:46.194 --rc geninfo_unexecuted_blocks=1 00:06:46.194 00:06:46.194 ' 00:06:46.194 07:16:14 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.194 --rc genhtml_branch_coverage=1 00:06:46.194 --rc genhtml_function_coverage=1 00:06:46.194 --rc genhtml_legend=1 00:06:46.194 --rc geninfo_all_blocks=1 00:06:46.194 --rc geninfo_unexecuted_blocks=1 00:06:46.194 00:06:46.194 ' 00:06:46.194 07:16:14 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.194 --rc genhtml_branch_coverage=1 00:06:46.194 --rc genhtml_function_coverage=1 00:06:46.194 --rc genhtml_legend=1 00:06:46.194 --rc geninfo_all_blocks=1 00:06:46.194 --rc geninfo_unexecuted_blocks=1 00:06:46.194 00:06:46.194 ' 00:06:46.194 07:16:14 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.194 07:16:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.194 07:16:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.194 07:16:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.194 07:16:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.194 07:16:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.194 07:16:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.194 07:16:14 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.194 07:16:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:46.194 07:16:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.194 07:16:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:46.194 INFO: launching applications... 
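The declare -A lines just above are json_config/common.sh's per-app bookkeeping: one associative array each for the app's pid, RPC socket, spdk_tgt parameters, and config file, all keyed by app name ('target' here). Reconstructed from the values in the trace, with the workspace prefix shortened:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='test/json_config/extra_key.json')

Keying everything by app name lets the same start/shutdown helpers manage more than one application per test (the json_config run earlier also cleans up an spdk_initiator_config.json) without duplicating the logic.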
00:06:46.194 07:16:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1223265 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:46.194 Waiting for target to run... 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1223265 /var/tmp/spdk_tgt.sock 00:06:46.194 07:16:14 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1223265 ']' 00:06:46.194 07:16:14 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:46.194 07:16:14 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:46.194 07:16:14 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.194 07:16:14 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:46.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:46.194 07:16:14 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.194 07:16:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:46.454 [2024-11-26 07:16:14.295563] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:06:46.454 [2024-11-26 07:16:14.295636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223265 ] 00:06:46.714 [2024-11-26 07:16:14.736432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.714 [2024-11-26 07:16:14.770806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.284 07:16:15 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.284 07:16:15 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:47.284 07:16:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:47.284 00:06:47.284 07:16:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:47.284 INFO: shutting down applications... 
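The 'Waiting for target to run...' banner above is the waitforlisten helper polling the freshly launched spdk_tgt until its UNIX-domain RPC socket answers; the trace shows a max_retries=100 budget. One way to express the idea, as a sketch rather than the actual autotest_common.sh implementation:

    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # retry interval is an assumption; only the retry cap is visible in the log
    done

Only once the socket responds does the test start driving the target, which is why every section of this log pairs a launch line with a 'Waiting for process to start up and listen on UNIX domain socket' message.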
00:06:47.284 07:16:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:47.284 07:16:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:47.284 07:16:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:47.284 07:16:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1223265 ]] 00:06:47.284 07:16:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1223265 00:06:47.284 07:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:47.284 07:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.284 07:16:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1223265 00:06:47.284 07:16:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.563 07:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.563 07:16:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.563 07:16:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1223265 00:06:47.563 07:16:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:47.563 07:16:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:47.563 07:16:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:47.563 07:16:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:47.563 SPDK target shutdown done 00:06:47.563 07:16:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:47.563 Success 00:06:47.563 00:06:47.563 real 0m1.583s 00:06:47.563 user 0m1.065s 00:06:47.563 sys 0m0.549s 00:06:47.563 07:16:15 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.563 07:16:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:47.563 ************************************ 00:06:47.563 END TEST json_config_extra_key 00:06:47.563 ************************************ 00:06:47.563 07:16:15 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:47.563 07:16:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.563 07:16:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.563 07:16:15 -- common/autotest_common.sh@10 -- # set +x 00:06:47.823 ************************************ 00:06:47.823 START TEST alias_rpc 00:06:47.823 ************************************ 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:47.823 * Looking for test storage... 
00:06:47.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.823 07:16:15 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.823 --rc genhtml_branch_coverage=1 00:06:47.823 --rc genhtml_function_coverage=1 00:06:47.823 --rc genhtml_legend=1 00:06:47.823 --rc geninfo_all_blocks=1 00:06:47.823 --rc geninfo_unexecuted_blocks=1 00:06:47.823 00:06:47.823 ' 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.823 --rc genhtml_branch_coverage=1 00:06:47.823 --rc genhtml_function_coverage=1 00:06:47.823 --rc genhtml_legend=1 00:06:47.823 --rc geninfo_all_blocks=1 00:06:47.823 --rc geninfo_unexecuted_blocks=1 00:06:47.823 00:06:47.823 ' 00:06:47.823 07:16:15 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.823 --rc genhtml_branch_coverage=1 00:06:47.823 --rc genhtml_function_coverage=1 00:06:47.823 --rc genhtml_legend=1 00:06:47.823 --rc geninfo_all_blocks=1 00:06:47.823 --rc geninfo_unexecuted_blocks=1 00:06:47.823 00:06:47.823 ' 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.823 --rc genhtml_branch_coverage=1 00:06:47.823 --rc genhtml_function_coverage=1 00:06:47.823 --rc genhtml_legend=1 00:06:47.823 --rc geninfo_all_blocks=1 00:06:47.823 --rc geninfo_unexecuted_blocks=1 00:06:47.823 00:06:47.823 ' 00:06:47.823 07:16:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.823 07:16:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1223650 00:06:47.823 07:16:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1223650 00:06:47.823 07:16:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1223650 ']' 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.823 07:16:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.082 [2024-11-26 07:16:15.953791] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:06:48.082 [2024-11-26 07:16:15.953870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223650 ] 00:06:48.082 [2024-11-26 07:16:16.043552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.082 [2024-11-26 07:16:16.078661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.019 07:16:16 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.019 07:16:16 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:49.019 07:16:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:49.019 07:16:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1223650 00:06:49.019 07:16:16 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1223650 ']' 00:06:49.019 07:16:16 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1223650 00:06:49.019 07:16:16 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:49.019 07:16:16 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.019 07:16:16 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1223650 00:06:49.019 07:16:17 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.019 07:16:17 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.019 07:16:17 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1223650' 00:06:49.019 killing process with pid 1223650 00:06:49.019 07:16:17 alias_rpc -- common/autotest_common.sh@973 -- # kill 1223650 00:06:49.019 07:16:17 alias_rpc -- common/autotest_common.sh@978 -- # wait 1223650 00:06:49.279 00:06:49.279 real 0m1.511s 00:06:49.279 user 0m1.639s 00:06:49.279 sys 0m0.448s 00:06:49.279 07:16:17 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.279 07:16:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.279 ************************************ 00:06:49.279 END TEST alias_rpc 00:06:49.279 ************************************ 00:06:49.279 07:16:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:49.279 07:16:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:49.279 07:16:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.279 07:16:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.279 07:16:17 -- common/autotest_common.sh@10 -- # set +x 00:06:49.279 ************************************ 00:06:49.279 START TEST spdkcli_tcp 00:06:49.279 ************************************ 00:06:49.279 07:16:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:49.279 * Looking for test storage... 
00:06:49.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.539 07:16:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.539 --rc genhtml_branch_coverage=1 00:06:49.539 --rc genhtml_function_coverage=1 00:06:49.539 --rc genhtml_legend=1 00:06:49.539 --rc geninfo_all_blocks=1 00:06:49.539 --rc geninfo_unexecuted_blocks=1 00:06:49.539 00:06:49.539 ' 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.539 --rc genhtml_branch_coverage=1 00:06:49.539 --rc genhtml_function_coverage=1 00:06:49.539 --rc genhtml_legend=1 00:06:49.539 --rc geninfo_all_blocks=1 00:06:49.539 --rc 
geninfo_unexecuted_blocks=1 00:06:49.539 00:06:49.539 ' 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.539 --rc genhtml_branch_coverage=1 00:06:49.539 --rc genhtml_function_coverage=1 00:06:49.539 --rc genhtml_legend=1 00:06:49.539 --rc geninfo_all_blocks=1 00:06:49.539 --rc geninfo_unexecuted_blocks=1 00:06:49.539 00:06:49.539 ' 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.539 --rc genhtml_branch_coverage=1 00:06:49.539 --rc genhtml_function_coverage=1 00:06:49.539 --rc genhtml_legend=1 00:06:49.539 --rc geninfo_all_blocks=1 00:06:49.539 --rc geninfo_unexecuted_blocks=1 00:06:49.539 00:06:49.539 ' 00:06:49.539 07:16:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:49.539 07:16:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:49.539 07:16:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:49.539 07:16:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:49.539 07:16:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:49.539 07:16:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:49.539 07:16:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.539 07:16:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1224047 00:06:49.539 07:16:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1224047 00:06:49.539 07:16:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1224047 ']' 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.539 07:16:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.539 [2024-11-26 07:16:17.537691] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:06:49.539 [2024-11-26 07:16:17.537745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224047 ] 00:06:49.539 [2024-11-26 07:16:17.615616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.799 [2024-11-26 07:16:17.648240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.799 [2024-11-26 07:16:17.648445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.369 07:16:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.369 07:16:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:50.369 07:16:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1224373 00:06:50.369 07:16:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:50.369 07:16:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:50.629 [ 00:06:50.629 "bdev_malloc_delete", 00:06:50.629 "bdev_malloc_create", 00:06:50.629 "bdev_null_resize", 00:06:50.629 "bdev_null_delete", 00:06:50.629 "bdev_null_create", 00:06:50.629 "bdev_nvme_cuse_unregister", 00:06:50.629 "bdev_nvme_cuse_register", 00:06:50.629 "bdev_opal_new_user", 00:06:50.629 "bdev_opal_set_lock_state", 00:06:50.629 "bdev_opal_delete", 00:06:50.629 "bdev_opal_get_info", 00:06:50.629 "bdev_opal_create", 00:06:50.629 "bdev_nvme_opal_revert", 00:06:50.629 "bdev_nvme_opal_init", 00:06:50.629 "bdev_nvme_send_cmd", 00:06:50.629 "bdev_nvme_set_keys", 00:06:50.629 "bdev_nvme_get_path_iostat", 00:06:50.629 "bdev_nvme_get_mdns_discovery_info", 00:06:50.629 "bdev_nvme_stop_mdns_discovery", 00:06:50.629 "bdev_nvme_start_mdns_discovery", 00:06:50.629 "bdev_nvme_set_multipath_policy", 00:06:50.629 "bdev_nvme_set_preferred_path", 00:06:50.629 "bdev_nvme_get_io_paths", 00:06:50.629 "bdev_nvme_remove_error_injection", 00:06:50.629 "bdev_nvme_add_error_injection", 00:06:50.629 "bdev_nvme_get_discovery_info", 00:06:50.629 "bdev_nvme_stop_discovery", 00:06:50.629 "bdev_nvme_start_discovery", 00:06:50.629 "bdev_nvme_get_controller_health_info", 00:06:50.629 "bdev_nvme_disable_controller", 00:06:50.629 "bdev_nvme_enable_controller", 00:06:50.629 "bdev_nvme_reset_controller", 00:06:50.629 "bdev_nvme_get_transport_statistics", 00:06:50.629 "bdev_nvme_apply_firmware", 00:06:50.629 "bdev_nvme_detach_controller", 00:06:50.629 "bdev_nvme_get_controllers", 00:06:50.629 "bdev_nvme_attach_controller", 00:06:50.629 "bdev_nvme_set_hotplug", 00:06:50.629 "bdev_nvme_set_options", 00:06:50.629 "bdev_passthru_delete", 00:06:50.629 "bdev_passthru_create", 00:06:50.629 "bdev_lvol_set_parent_bdev", 00:06:50.629 "bdev_lvol_set_parent", 00:06:50.629 "bdev_lvol_check_shallow_copy", 00:06:50.629 "bdev_lvol_start_shallow_copy", 00:06:50.629 "bdev_lvol_grow_lvstore", 00:06:50.629 "bdev_lvol_get_lvols", 00:06:50.629 "bdev_lvol_get_lvstores", 00:06:50.629 "bdev_lvol_delete", 00:06:50.629 "bdev_lvol_set_read_only", 00:06:50.629 "bdev_lvol_resize", 00:06:50.629 "bdev_lvol_decouple_parent", 00:06:50.629 "bdev_lvol_inflate", 00:06:50.629 "bdev_lvol_rename", 00:06:50.629 "bdev_lvol_clone_bdev", 00:06:50.629 "bdev_lvol_clone", 00:06:50.629 "bdev_lvol_snapshot", 00:06:50.629 "bdev_lvol_create", 00:06:50.629 "bdev_lvol_delete_lvstore", 00:06:50.629 "bdev_lvol_rename_lvstore", 
00:06:50.629 "bdev_lvol_create_lvstore", 00:06:50.629 "bdev_raid_set_options", 00:06:50.629 "bdev_raid_remove_base_bdev", 00:06:50.629 "bdev_raid_add_base_bdev", 00:06:50.629 "bdev_raid_delete", 00:06:50.629 "bdev_raid_create", 00:06:50.629 "bdev_raid_get_bdevs", 00:06:50.629 "bdev_error_inject_error", 00:06:50.629 "bdev_error_delete", 00:06:50.629 "bdev_error_create", 00:06:50.629 "bdev_split_delete", 00:06:50.629 "bdev_split_create", 00:06:50.629 "bdev_delay_delete", 00:06:50.629 "bdev_delay_create", 00:06:50.629 "bdev_delay_update_latency", 00:06:50.629 "bdev_zone_block_delete", 00:06:50.629 "bdev_zone_block_create", 00:06:50.629 "blobfs_create", 00:06:50.629 "blobfs_detect", 00:06:50.629 "blobfs_set_cache_size", 00:06:50.629 "bdev_aio_delete", 00:06:50.629 "bdev_aio_rescan", 00:06:50.629 "bdev_aio_create", 00:06:50.629 "bdev_ftl_set_property", 00:06:50.629 "bdev_ftl_get_properties", 00:06:50.629 "bdev_ftl_get_stats", 00:06:50.629 "bdev_ftl_unmap", 00:06:50.629 "bdev_ftl_unload", 00:06:50.629 "bdev_ftl_delete", 00:06:50.629 "bdev_ftl_load", 00:06:50.629 "bdev_ftl_create", 00:06:50.629 "bdev_virtio_attach_controller", 00:06:50.629 "bdev_virtio_scsi_get_devices", 00:06:50.629 "bdev_virtio_detach_controller", 00:06:50.629 "bdev_virtio_blk_set_hotplug", 00:06:50.629 "bdev_iscsi_delete", 00:06:50.629 "bdev_iscsi_create", 00:06:50.629 "bdev_iscsi_set_options", 00:06:50.629 "accel_error_inject_error", 00:06:50.629 "ioat_scan_accel_module", 00:06:50.629 "dsa_scan_accel_module", 00:06:50.629 "iaa_scan_accel_module", 00:06:50.629 "vfu_virtio_create_fs_endpoint", 00:06:50.629 "vfu_virtio_create_scsi_endpoint", 00:06:50.629 "vfu_virtio_scsi_remove_target", 00:06:50.629 "vfu_virtio_scsi_add_target", 00:06:50.629 "vfu_virtio_create_blk_endpoint", 00:06:50.629 "vfu_virtio_delete_endpoint", 00:06:50.629 "keyring_file_remove_key", 00:06:50.629 "keyring_file_add_key", 00:06:50.629 "keyring_linux_set_options", 00:06:50.629 "fsdev_aio_delete", 00:06:50.629 "fsdev_aio_create", 00:06:50.629 "iscsi_get_histogram", 00:06:50.629 "iscsi_enable_histogram", 00:06:50.629 "iscsi_set_options", 00:06:50.629 "iscsi_get_auth_groups", 00:06:50.629 "iscsi_auth_group_remove_secret", 00:06:50.629 "iscsi_auth_group_add_secret", 00:06:50.629 "iscsi_delete_auth_group", 00:06:50.629 "iscsi_create_auth_group", 00:06:50.629 "iscsi_set_discovery_auth", 00:06:50.629 "iscsi_get_options", 00:06:50.629 "iscsi_target_node_request_logout", 00:06:50.629 "iscsi_target_node_set_redirect", 00:06:50.629 "iscsi_target_node_set_auth", 00:06:50.629 "iscsi_target_node_add_lun", 00:06:50.629 "iscsi_get_stats", 00:06:50.629 "iscsi_get_connections", 00:06:50.629 "iscsi_portal_group_set_auth", 00:06:50.629 "iscsi_start_portal_group", 00:06:50.629 "iscsi_delete_portal_group", 00:06:50.629 "iscsi_create_portal_group", 00:06:50.629 "iscsi_get_portal_groups", 00:06:50.629 "iscsi_delete_target_node", 00:06:50.629 "iscsi_target_node_remove_pg_ig_maps", 00:06:50.629 "iscsi_target_node_add_pg_ig_maps", 00:06:50.629 "iscsi_create_target_node", 00:06:50.629 "iscsi_get_target_nodes", 00:06:50.629 "iscsi_delete_initiator_group", 00:06:50.629 "iscsi_initiator_group_remove_initiators", 00:06:50.629 "iscsi_initiator_group_add_initiators", 00:06:50.629 "iscsi_create_initiator_group", 00:06:50.629 "iscsi_get_initiator_groups", 00:06:50.629 "nvmf_set_crdt", 00:06:50.629 "nvmf_set_config", 00:06:50.629 "nvmf_set_max_subsystems", 00:06:50.629 "nvmf_stop_mdns_prr", 00:06:50.629 "nvmf_publish_mdns_prr", 00:06:50.629 "nvmf_subsystem_get_listeners", 00:06:50.629 
"nvmf_subsystem_get_qpairs", 00:06:50.629 "nvmf_subsystem_get_controllers", 00:06:50.629 "nvmf_get_stats", 00:06:50.629 "nvmf_get_transports", 00:06:50.629 "nvmf_create_transport", 00:06:50.629 "nvmf_get_targets", 00:06:50.629 "nvmf_delete_target", 00:06:50.629 "nvmf_create_target", 00:06:50.629 "nvmf_subsystem_allow_any_host", 00:06:50.629 "nvmf_subsystem_set_keys", 00:06:50.629 "nvmf_subsystem_remove_host", 00:06:50.629 "nvmf_subsystem_add_host", 00:06:50.630 "nvmf_ns_remove_host", 00:06:50.630 "nvmf_ns_add_host", 00:06:50.630 "nvmf_subsystem_remove_ns", 00:06:50.630 "nvmf_subsystem_set_ns_ana_group", 00:06:50.630 "nvmf_subsystem_add_ns", 00:06:50.630 "nvmf_subsystem_listener_set_ana_state", 00:06:50.630 "nvmf_discovery_get_referrals", 00:06:50.630 "nvmf_discovery_remove_referral", 00:06:50.630 "nvmf_discovery_add_referral", 00:06:50.630 "nvmf_subsystem_remove_listener", 00:06:50.630 "nvmf_subsystem_add_listener", 00:06:50.630 "nvmf_delete_subsystem", 00:06:50.630 "nvmf_create_subsystem", 00:06:50.630 "nvmf_get_subsystems", 00:06:50.630 "env_dpdk_get_mem_stats", 00:06:50.630 "nbd_get_disks", 00:06:50.630 "nbd_stop_disk", 00:06:50.630 "nbd_start_disk", 00:06:50.630 "ublk_recover_disk", 00:06:50.630 "ublk_get_disks", 00:06:50.630 "ublk_stop_disk", 00:06:50.630 "ublk_start_disk", 00:06:50.630 "ublk_destroy_target", 00:06:50.630 "ublk_create_target", 00:06:50.630 "virtio_blk_create_transport", 00:06:50.630 "virtio_blk_get_transports", 00:06:50.630 "vhost_controller_set_coalescing", 00:06:50.630 "vhost_get_controllers", 00:06:50.630 "vhost_delete_controller", 00:06:50.630 "vhost_create_blk_controller", 00:06:50.630 "vhost_scsi_controller_remove_target", 00:06:50.630 "vhost_scsi_controller_add_target", 00:06:50.630 "vhost_start_scsi_controller", 00:06:50.630 "vhost_create_scsi_controller", 00:06:50.630 "thread_set_cpumask", 00:06:50.630 "scheduler_set_options", 00:06:50.630 "framework_get_governor", 00:06:50.630 "framework_get_scheduler", 00:06:50.630 "framework_set_scheduler", 00:06:50.630 "framework_get_reactors", 00:06:50.630 "thread_get_io_channels", 00:06:50.630 "thread_get_pollers", 00:06:50.630 "thread_get_stats", 00:06:50.630 "framework_monitor_context_switch", 00:06:50.630 "spdk_kill_instance", 00:06:50.630 "log_enable_timestamps", 00:06:50.630 "log_get_flags", 00:06:50.630 "log_clear_flag", 00:06:50.630 "log_set_flag", 00:06:50.630 "log_get_level", 00:06:50.630 "log_set_level", 00:06:50.630 "log_get_print_level", 00:06:50.630 "log_set_print_level", 00:06:50.630 "framework_enable_cpumask_locks", 00:06:50.630 "framework_disable_cpumask_locks", 00:06:50.630 "framework_wait_init", 00:06:50.630 "framework_start_init", 00:06:50.630 "scsi_get_devices", 00:06:50.630 "bdev_get_histogram", 00:06:50.630 "bdev_enable_histogram", 00:06:50.630 "bdev_set_qos_limit", 00:06:50.630 "bdev_set_qd_sampling_period", 00:06:50.630 "bdev_get_bdevs", 00:06:50.630 "bdev_reset_iostat", 00:06:50.630 "bdev_get_iostat", 00:06:50.630 "bdev_examine", 00:06:50.630 "bdev_wait_for_examine", 00:06:50.630 "bdev_set_options", 00:06:50.630 "accel_get_stats", 00:06:50.630 "accel_set_options", 00:06:50.630 "accel_set_driver", 00:06:50.630 "accel_crypto_key_destroy", 00:06:50.630 "accel_crypto_keys_get", 00:06:50.630 "accel_crypto_key_create", 00:06:50.630 "accel_assign_opc", 00:06:50.630 "accel_get_module_info", 00:06:50.630 "accel_get_opc_assignments", 00:06:50.630 "vmd_rescan", 00:06:50.630 "vmd_remove_device", 00:06:50.630 "vmd_enable", 00:06:50.630 "sock_get_default_impl", 00:06:50.630 "sock_set_default_impl", 
00:06:50.630 "sock_impl_set_options", 00:06:50.630 "sock_impl_get_options", 00:06:50.630 "iobuf_get_stats", 00:06:50.630 "iobuf_set_options", 00:06:50.630 "keyring_get_keys", 00:06:50.630 "vfu_tgt_set_base_path", 00:06:50.630 "framework_get_pci_devices", 00:06:50.630 "framework_get_config", 00:06:50.630 "framework_get_subsystems", 00:06:50.630 "fsdev_set_opts", 00:06:50.630 "fsdev_get_opts", 00:06:50.630 "trace_get_info", 00:06:50.630 "trace_get_tpoint_group_mask", 00:06:50.630 "trace_disable_tpoint_group", 00:06:50.630 "trace_enable_tpoint_group", 00:06:50.630 "trace_clear_tpoint_mask", 00:06:50.630 "trace_set_tpoint_mask", 00:06:50.630 "notify_get_notifications", 00:06:50.630 "notify_get_types", 00:06:50.630 "spdk_get_version", 00:06:50.630 "rpc_get_methods" 00:06:50.630 ] 00:06:50.630 07:16:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.630 07:16:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:50.630 07:16:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1224047 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1224047 ']' 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1224047 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1224047 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1224047' 00:06:50.630 killing process with pid 1224047 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1224047 00:06:50.630 07:16:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1224047 00:06:50.890 00:06:50.890 real 0m1.520s 00:06:50.890 user 0m2.806s 00:06:50.890 sys 0m0.434s 00:06:50.890 07:16:18 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.890 07:16:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.890 ************************************ 00:06:50.890 END TEST spdkcli_tcp 00:06:50.890 ************************************ 00:06:50.890 07:16:18 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:50.890 07:16:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.890 07:16:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.890 07:16:18 -- common/autotest_common.sh@10 -- # set +x 00:06:50.890 ************************************ 00:06:50.890 START TEST dpdk_mem_utility 00:06:50.891 ************************************ 00:06:50.891 07:16:18 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:50.891 * Looking for test storage... 
00:06:50.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:50.891 07:16:18 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.891 07:16:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.891 07:16:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.151 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.151 07:16:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:51.151 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.151 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.151 --rc genhtml_branch_coverage=1 00:06:51.151 --rc genhtml_function_coverage=1 00:06:51.151 --rc genhtml_legend=1 00:06:51.151 --rc geninfo_all_blocks=1 00:06:51.151 --rc geninfo_unexecuted_blocks=1 00:06:51.151 00:06:51.151 ' 00:06:51.151 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.151 --rc 
genhtml_branch_coverage=1 00:06:51.151 --rc genhtml_function_coverage=1 00:06:51.151 --rc genhtml_legend=1 00:06:51.151 --rc geninfo_all_blocks=1 00:06:51.151 --rc geninfo_unexecuted_blocks=1 00:06:51.151 00:06:51.151 ' 00:06:51.151 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.151 --rc genhtml_branch_coverage=1 00:06:51.151 --rc genhtml_function_coverage=1 00:06:51.151 --rc genhtml_legend=1 00:06:51.151 --rc geninfo_all_blocks=1 00:06:51.151 --rc geninfo_unexecuted_blocks=1 00:06:51.151 00:06:51.151 ' 00:06:51.151 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.151 --rc genhtml_branch_coverage=1 00:06:51.151 --rc genhtml_function_coverage=1 00:06:51.151 --rc genhtml_legend=1 00:06:51.151 --rc geninfo_all_blocks=1 00:06:51.151 --rc geninfo_unexecuted_blocks=1 00:06:51.151 00:06:51.151 ' 00:06:51.151 07:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:51.152 07:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1224453 00:06:51.152 07:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1224453 00:06:51.152 07:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:51.152 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1224453 ']' 00:06:51.152 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.152 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.152 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.152 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.152 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:51.152 [2024-11-26 07:16:19.129822] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:06:51.152 [2024-11-26 07:16:19.129896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224453 ] 00:06:51.152 [2024-11-26 07:16:19.217246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.412 [2024-11-26 07:16:19.252140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.982 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.982 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:51.982 07:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:51.982 07:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:51.982 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.982 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:51.982 { 00:06:51.982 "filename": "/tmp/spdk_mem_dump.txt" 00:06:51.982 } 00:06:51.982 07:16:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.982 07:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:51.982 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:51.982 1 heaps totaling size 810.000000 MiB 00:06:51.982 size: 810.000000 MiB heap id: 0 00:06:51.982 end heaps---------- 00:06:51.982 9 mempools totaling size 595.772034 MiB 00:06:51.982 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:51.982 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:51.982 size: 92.545471 MiB name: bdev_io_1224453 00:06:51.982 size: 50.003479 MiB name: msgpool_1224453 00:06:51.982 size: 36.509338 MiB name: fsdev_io_1224453 00:06:51.982 size: 21.763794 MiB name: PDU_Pool 00:06:51.982 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:51.982 size: 4.133484 MiB name: evtpool_1224453 00:06:51.982 size: 0.026123 MiB name: Session_Pool 00:06:51.982 end mempools------- 00:06:51.982 6 memzones totaling size 4.142822 MiB 00:06:51.982 size: 1.000366 MiB name: RG_ring_0_1224453 00:06:51.982 size: 1.000366 MiB name: RG_ring_1_1224453 00:06:51.982 size: 1.000366 MiB name: RG_ring_4_1224453 00:06:51.982 size: 1.000366 MiB name: RG_ring_5_1224453 00:06:51.982 size: 0.125366 MiB name: RG_ring_2_1224453 00:06:51.982 size: 0.015991 MiB name: RG_ring_3_1224453 00:06:51.982 end memzones------- 00:06:51.982 07:16:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:51.982 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:51.982 list of free elements. 
size: 10.862488 MiB 00:06:51.982 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:51.982 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:51.982 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:51.982 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:51.982 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:51.982 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:51.982 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:51.982 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:51.982 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:51.982 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:51.982 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:51.982 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:51.982 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:51.982 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:51.982 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:51.982 list of standard malloc elements. size: 199.218628 MiB 00:06:51.982 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:51.982 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:51.982 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:51.982 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:51.982 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:51.982 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:51.982 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:51.982 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:51.982 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:51.982 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:51.982 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:51.982 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:51.982 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:51.982 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:51.982 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:51.982 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:51.982 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:51.982 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:51.982 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:51.982 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:51.982 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:51.982 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:51.982 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:51.982 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:51.982 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:51.982 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:51.982 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:51.982 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:51.982 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:51.982 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:51.982 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:51.982 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:51.982 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:51.983 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:51.983 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:51.983 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:51.983 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:51.983 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:51.983 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:51.983 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:51.983 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:51.983 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:51.983 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:51.983 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:51.983 list of memzone associated elements. size: 599.918884 MiB 00:06:51.983 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:51.983 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:51.983 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:51.983 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:51.983 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:51.983 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1224453_0 00:06:51.983 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:51.983 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1224453_0 00:06:51.983 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:51.983 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1224453_0 00:06:51.983 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:51.983 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:51.983 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:51.983 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:51.983 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:51.983 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1224453_0 00:06:51.983 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:51.983 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1224453 00:06:51.983 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:51.983 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1224453 00:06:51.983 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:51.983 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:51.983 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:51.983 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:51.983 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:51.983 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:51.983 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:51.983 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:51.983 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:51.983 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1224453 00:06:51.983 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:51.983 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1224453 00:06:51.983 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:51.983 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1224453 00:06:51.983 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:06:51.983 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1224453 00:06:51.983 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:51.983 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1224453 00:06:51.983 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:51.983 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1224453 00:06:51.983 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:51.983 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:51.983 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:51.983 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:51.983 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:51.983 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:51.983 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:51.983 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1224453 00:06:51.983 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:51.983 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1224453 00:06:51.983 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:51.983 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:51.983 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:51.983 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:51.983 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:51.983 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1224453 00:06:51.983 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:51.983 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:51.983 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:51.983 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1224453 00:06:51.983 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:51.983 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1224453 00:06:51.983 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:51.983 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1224453 00:06:51.983 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:51.983 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:51.983 07:16:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:51.983 07:16:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1224453 00:06:51.983 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1224453 ']' 00:06:51.983 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1224453 00:06:51.983 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:51.983 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.983 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1224453 00:06:51.983 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.983 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.983 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1224453' 00:06:51.983 killing process with pid 1224453 00:06:51.983 07:16:20 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1224453 00:06:51.983 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1224453 00:06:52.244 00:06:52.244 real 0m1.379s 00:06:52.244 user 0m1.441s 00:06:52.244 sys 0m0.408s 00:06:52.244 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.244 07:16:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:52.244 ************************************ 00:06:52.244 END TEST dpdk_mem_utility 00:06:52.244 ************************************ 00:06:52.244 07:16:20 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:52.244 07:16:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.244 07:16:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.244 07:16:20 -- common/autotest_common.sh@10 -- # set +x 00:06:52.244 ************************************ 00:06:52.244 START TEST event 00:06:52.244 ************************************ 00:06:52.244 07:16:20 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:52.505 * Looking for test storage... 00:06:52.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:52.505 07:16:20 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.505 07:16:20 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.505 07:16:20 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.505 07:16:20 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.505 07:16:20 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.505 07:16:20 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.505 07:16:20 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.505 07:16:20 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.505 07:16:20 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.505 07:16:20 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.505 07:16:20 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.505 07:16:20 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.505 07:16:20 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.505 07:16:20 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.505 07:16:20 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.505 07:16:20 event -- scripts/common.sh@344 -- # case "$op" in 00:06:52.505 07:16:20 event -- scripts/common.sh@345 -- # : 1 00:06:52.505 07:16:20 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.505 07:16:20 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.505 07:16:20 event -- scripts/common.sh@365 -- # decimal 1 00:06:52.505 07:16:20 event -- scripts/common.sh@353 -- # local d=1 00:06:52.505 07:16:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.505 07:16:20 event -- scripts/common.sh@355 -- # echo 1 00:06:52.505 07:16:20 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.505 07:16:20 event -- scripts/common.sh@366 -- # decimal 2 00:06:52.505 07:16:20 event -- scripts/common.sh@353 -- # local d=2 00:06:52.505 07:16:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.505 07:16:20 event -- scripts/common.sh@355 -- # echo 2 00:06:52.505 07:16:20 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.506 07:16:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.506 07:16:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.506 07:16:20 event -- scripts/common.sh@368 -- # return 0 00:06:52.506 07:16:20 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.506 07:16:20 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:52.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.506 --rc genhtml_branch_coverage=1 00:06:52.506 --rc genhtml_function_coverage=1 00:06:52.506 --rc genhtml_legend=1 00:06:52.506 --rc geninfo_all_blocks=1 00:06:52.506 --rc geninfo_unexecuted_blocks=1 00:06:52.506 00:06:52.506 ' 00:06:52.506 07:16:20 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:52.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.506 --rc genhtml_branch_coverage=1 00:06:52.506 --rc genhtml_function_coverage=1 00:06:52.506 --rc genhtml_legend=1 00:06:52.506 --rc geninfo_all_blocks=1 00:06:52.506 --rc geninfo_unexecuted_blocks=1 00:06:52.506 00:06:52.506 ' 00:06:52.506 07:16:20 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:52.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.506 --rc genhtml_branch_coverage=1 00:06:52.506 --rc genhtml_function_coverage=1 00:06:52.506 --rc genhtml_legend=1 00:06:52.506 --rc geninfo_all_blocks=1 00:06:52.506 --rc geninfo_unexecuted_blocks=1 00:06:52.506 00:06:52.506 ' 00:06:52.506 07:16:20 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:52.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.506 --rc genhtml_branch_coverage=1 00:06:52.506 --rc genhtml_function_coverage=1 00:06:52.506 --rc genhtml_legend=1 00:06:52.506 --rc geninfo_all_blocks=1 00:06:52.506 --rc geninfo_unexecuted_blocks=1 00:06:52.506 00:06:52.506 ' 00:06:52.506 07:16:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:52.506 07:16:20 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:52.506 07:16:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:52.506 07:16:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:52.506 07:16:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.506 07:16:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.506 ************************************ 00:06:52.506 START TEST event_perf 00:06:52.506 ************************************ 00:06:52.506 07:16:20 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:52.767 Running I/O for 1 seconds...[2024-11-26 07:16:20.600095] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:06:52.767 [2024-11-26 07:16:20.600216] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224858 ] 00:06:52.767 [2024-11-26 07:16:20.693388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.767 [2024-11-26 07:16:20.736580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.767 [2024-11-26 07:16:20.736730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.767 [2024-11-26 07:16:20.736884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.767 Running I/O for 1 seconds...[2024-11-26 07:16:20.736885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.710 00:06:53.710 lcore 0: 177712 00:06:53.710 lcore 1: 177715 00:06:53.710 lcore 2: 177712 00:06:53.710 lcore 3: 177713 00:06:53.710 done. 00:06:53.710 00:06:53.710 real 0m1.186s 00:06:53.710 user 0m4.091s 00:06:53.710 sys 0m0.089s 00:06:53.710 07:16:21 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.710 07:16:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.710 ************************************ 00:06:53.710 END TEST event_perf 00:06:53.710 ************************************ 00:06:53.710 07:16:21 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:53.710 07:16:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:53.710 07:16:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.710 07:16:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.971 ************************************ 00:06:53.971 START TEST event_reactor 00:06:53.971 ************************************ 00:06:53.971 07:16:21 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:53.971 [2024-11-26 07:16:21.865608] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:06:53.971 [2024-11-26 07:16:21.865712] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225209 ] 00:06:53.971 [2024-11-26 07:16:21.952053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.971 [2024-11-26 07:16:21.985371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.353 test_start 00:06:55.353 oneshot 00:06:55.353 tick 100 00:06:55.353 tick 100 00:06:55.353 tick 250 00:06:55.353 tick 100 00:06:55.353 tick 100 00:06:55.353 tick 100 00:06:55.353 tick 250 00:06:55.353 tick 500 00:06:55.353 tick 100 00:06:55.353 tick 100 00:06:55.353 tick 250 00:06:55.353 tick 100 00:06:55.353 tick 100 00:06:55.353 test_end 00:06:55.353 00:06:55.353 real 0m1.166s 00:06:55.353 user 0m1.091s 00:06:55.353 sys 0m0.072s 00:06:55.353 07:16:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.353 07:16:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:55.353 ************************************ 00:06:55.353 END TEST event_reactor 00:06:55.353 ************************************ 00:06:55.353 07:16:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:55.353 07:16:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:55.353 07:16:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.353 07:16:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.353 ************************************ 00:06:55.353 START TEST event_reactor_perf 00:06:55.353 ************************************ 00:06:55.353 07:16:23 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:55.353 [2024-11-26 07:16:23.112023] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:06:55.354 [2024-11-26 07:16:23.112129] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225503 ] 00:06:55.354 [2024-11-26 07:16:23.197699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.354 [2024-11-26 07:16:23.229710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.295 test_start 00:06:56.295 test_end 00:06:56.295 Performance: 540850 events per second 00:06:56.295 00:06:56.295 real 0m1.165s 00:06:56.295 user 0m1.084s 00:06:56.295 sys 0m0.078s 00:06:56.295 07:16:24 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.295 07:16:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.295 ************************************ 00:06:56.295 END TEST event_reactor_perf 00:06:56.295 ************************************ 00:06:56.295 07:16:24 event -- event/event.sh@49 -- # uname -s 00:06:56.295 07:16:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:56.295 07:16:24 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:56.295 07:16:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.295 07:16:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.295 07:16:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.295 ************************************ 00:06:56.295 START TEST event_scheduler 00:06:56.295 ************************************ 00:06:56.295 07:16:24 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:56.556 * Looking for test storage... 
00:06:56.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:56.556 07:16:24 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.556 07:16:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.556 07:16:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.556 07:16:24 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.556 07:16:24 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:56.556 07:16:24 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.556 07:16:24 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.556 --rc genhtml_branch_coverage=1 00:06:56.556 --rc genhtml_function_coverage=1 00:06:56.556 --rc genhtml_legend=1 00:06:56.556 --rc geninfo_all_blocks=1 00:06:56.556 --rc geninfo_unexecuted_blocks=1 00:06:56.556 00:06:56.556 ' 00:06:56.556 07:16:24 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.556 --rc genhtml_branch_coverage=1 00:06:56.556 --rc genhtml_function_coverage=1 00:06:56.556 --rc genhtml_legend=1 00:06:56.556 --rc geninfo_all_blocks=1 00:06:56.556 --rc geninfo_unexecuted_blocks=1 00:06:56.556 00:06:56.556 ' 00:06:56.556 07:16:24 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.556 --rc genhtml_branch_coverage=1 00:06:56.556 --rc genhtml_function_coverage=1 00:06:56.557 --rc genhtml_legend=1 00:06:56.557 --rc geninfo_all_blocks=1 00:06:56.557 --rc geninfo_unexecuted_blocks=1 00:06:56.557 00:06:56.557 ' 00:06:56.557 07:16:24 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.557 --rc genhtml_branch_coverage=1 00:06:56.557 --rc genhtml_function_coverage=1 00:06:56.557 --rc genhtml_legend=1 00:06:56.557 --rc geninfo_all_blocks=1 00:06:56.557 --rc geninfo_unexecuted_blocks=1 00:06:56.557 00:06:56.557 ' 00:06:56.557 07:16:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:56.557 07:16:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1225768 00:06:56.557 07:16:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.557 07:16:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1225768 00:06:56.557 07:16:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:06:56.557 07:16:24 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1225768 ']' 00:06:56.557 07:16:24 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.557 07:16:24 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.557 07:16:24 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.557 07:16:24 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.557 07:16:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.557 [2024-11-26 07:16:24.597037] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:06:56.557 [2024-11-26 07:16:24.597114] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225768 ] 00:06:56.817 [2024-11-26 07:16:24.694877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.817 [2024-11-26 07:16:24.751692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.817 [2024-11-26 07:16:24.751860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.817 [2024-11-26 07:16:24.752021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.817 [2024-11-26 07:16:24.752021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.389 07:16:25 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.389 07:16:25 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:57.389 07:16:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:57.389 07:16:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.389 07:16:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.389 [2024-11-26 07:16:25.422395] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:57.389 [2024-11-26 07:16:25.422413] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:57.389 [2024-11-26 07:16:25.422424] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:57.389 [2024-11-26 07:16:25.422431] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:57.389 [2024-11-26 07:16:25.422436] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:57.389 07:16:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.389 07:16:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:57.389 07:16:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.389 07:16:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 [2024-11-26 07:16:25.488667] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
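For reference, the dynamic-scheduler bring-up traced above reduces to three RPC calls against the scheduler test app. A minimal sketch in plain shell, assuming the app listens on the default /var/tmp/spdk.sock and with the long Jenkins workspace prefixes abbreviated to repo-relative paths:

  # start the scheduler test app paused (--wait-for-rpc), so the scheduler
  # can be chosen before subsystem init; flags match the invocation above
  test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # select the dynamic scheduler; the dpdk_governor init may fail with the
  # SMT-siblings error seen in the log, which the test tolerates
  scripts/rpc.py framework_set_scheduler dynamic
  # finish initialization; reactors start and the scheduler begins balancing
  scripts/rpc.py framework_start_init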
00:06:57.650 07:16:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.650 07:16:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:57.650 07:16:25 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.650 07:16:25 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.650 07:16:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 ************************************ 00:06:57.650 START TEST scheduler_create_thread 00:06:57.650 ************************************ 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 2 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 3 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 4 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 5 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 6 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 7 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 8 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.650 9 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.650 07:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.222 10 00:06:58.222 07:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.222 07:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:58.222 07:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.222 07:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.604 07:16:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.604 07:16:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:59.604 07:16:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:59.604 07:16:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.604 07:16:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.174 07:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.175 07:16:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:00.175 07:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.175 07:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.114 07:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.114 07:16:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:01.114 07:16:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:01.114 07:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.114 07:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.685 07:16:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.685 00:07:01.685 real 0m4.225s 00:07:01.685 user 0m0.024s 00:07:01.685 sys 0m0.008s 00:07:01.685 07:16:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.685 07:16:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.685 ************************************ 00:07:01.685 END TEST scheduler_create_thread 00:07:01.685 ************************************ 00:07:01.946 07:16:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:01.946 07:16:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1225768 00:07:01.946 07:16:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1225768 ']' 00:07:01.947 07:16:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1225768 00:07:01.947 07:16:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:01.947 07:16:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.947 07:16:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1225768 00:07:01.947 07:16:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:01.947 07:16:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:01.947 07:16:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1225768' 00:07:01.947 killing process with pid 1225768 00:07:01.947 07:16:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1225768 00:07:01.947 07:16:29 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1225768 00:07:01.947 [2024-11-26 07:16:30.029996] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
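The scheduler_create_thread test body above is likewise a thin wrapper over scheduler_plugin RPCs. A minimal sketch of the thread lifecycle it exercises, assuming the same running app and that the plugin module is importable by rpc.py (the test arranges this via PYTHONPATH); the thread IDs shown (11 and 12) are simply whatever scheduler_thread_create returned in this particular run:

  # create an always-active thread pinned to core 0 (cpumask 0x1, 100% active)
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # lower thread 11 to 50% active load
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  # delete thread 12 and let the dynamic scheduler rebalance the rest
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12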
00:07:02.209 00:07:02.209 real 0m5.845s 00:07:02.209 user 0m12.883s 00:07:02.209 sys 0m0.446s 00:07:02.209 07:16:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.209 07:16:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:02.209 ************************************ 00:07:02.209 END TEST event_scheduler 00:07:02.209 ************************************ 00:07:02.209 07:16:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:02.209 07:16:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:02.209 07:16:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.209 07:16:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.209 07:16:30 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.209 ************************************ 00:07:02.209 START TEST app_repeat 00:07:02.209 ************************************ 00:07:02.209 07:16:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1227018 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1227018' 00:07:02.209 Process app_repeat pid: 1227018 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:02.209 spdk_app_start Round 0 00:07:02.209 07:16:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1227018 /var/tmp/spdk-nbd.sock 00:07:02.209 07:16:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1227018 ']' 00:07:02.209 07:16:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.209 07:16:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.209 07:16:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.209 07:16:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.209 07:16:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.209 [2024-11-26 07:16:30.297897] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
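Above, waitforlisten (max_retries=100 in the trace) blocks until the freshly forked app_repeat process answers on /var/tmp/spdk-nbd.sock. A rough sketch of that polling loop; using rpc_get_methods as the liveness probe and a 0.1 s backoff are assumptions here, not the helper's exact internals:

rpc_server=/var/tmp/spdk-nbd.sock
retries=100
# repeat_pid holds the PID captured when app_repeat was forked (1227018 above)
until scripts/rpc.py -s "$rpc_server" -t 1 rpc_get_methods &>/dev/null; do
    (( retries-- > 0 )) || { echo "app never listened on $rpc_server" >&2; exit 1; }
    kill -0 "$repeat_pid" 2>/dev/null || { echo "app died during startup" >&2; exit 1; }
    sleep 0.1
done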
00:07:02.209 [2024-11-26 07:16:30.297979] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227018 ] 00:07:02.470 [2024-11-26 07:16:30.383867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.470 [2024-11-26 07:16:30.416073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.470 [2024-11-26 07:16:30.416074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.470 07:16:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.470 07:16:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:02.471 07:16:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.731 Malloc0 00:07:02.731 07:16:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.992 Malloc1 00:07:02.992 07:16:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.992 07:16:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:02.992 /dev/nbd0 00:07:03.255 07:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.255 07:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.255 1+0 records in 00:07:03.255 1+0 records out 00:07:03.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000100903 s, 40.6 MB/s 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:03.255 07:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.255 07:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.255 07:16:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:03.255 /dev/nbd1 00:07:03.255 07:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:03.255 07:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.255 07:16:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.255 1+0 records in 00:07:03.255 1+0 records out 00:07:03.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257708 s, 15.9 MB/s 00:07:03.516 07:16:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.516 07:16:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:03.516 07:16:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:03.516 07:16:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.516 07:16:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:03.516 07:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.516 07:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.516 
07:16:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.516 07:16:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.516 07:16:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.516 07:16:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.516 { 00:07:03.516 "nbd_device": "/dev/nbd0", 00:07:03.516 "bdev_name": "Malloc0" 00:07:03.516 }, 00:07:03.516 { 00:07:03.517 "nbd_device": "/dev/nbd1", 00:07:03.517 "bdev_name": "Malloc1" 00:07:03.517 } 00:07:03.517 ]' 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.517 { 00:07:03.517 "nbd_device": "/dev/nbd0", 00:07:03.517 "bdev_name": "Malloc0" 00:07:03.517 }, 00:07:03.517 { 00:07:03.517 "nbd_device": "/dev/nbd1", 00:07:03.517 "bdev_name": "Malloc1" 00:07:03.517 } 00:07:03.517 ]' 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:03.517 /dev/nbd1' 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:03.517 /dev/nbd1' 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:03.517 07:16:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:03.777 256+0 records in 00:07:03.777 256+0 records out 00:07:03.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124468 s, 84.2 MB/s 00:07:03.777 07:16:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:03.778 256+0 records in 00:07:03.778 256+0 records out 00:07:03.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122148 s, 85.8 MB/s 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:03.778 256+0 records in 00:07:03.778 256+0 records out 00:07:03.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129915 s, 80.7 MB/s 00:07:03.778 07:16:31 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.778 07:16:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.039 07:16:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.039 07:16:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.039 07:16:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.039 07:16:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.039 07:16:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.039 07:16:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.039 07:16:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.039 07:16:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.039 07:16:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.039 07:16:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.300 07:16:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.300 07:16:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:04.562 07:16:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:04.562 [2024-11-26 07:16:32.575993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.562 [2024-11-26 07:16:32.607567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.562 [2024-11-26 07:16:32.607568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.562 [2024-11-26 07:16:32.636683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:04.562 [2024-11-26 07:16:32.636713] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:07.864 07:16:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:07.864 07:16:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:07.864 spdk_app_start Round 1 00:07:07.864 07:16:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1227018 /var/tmp/spdk-nbd.sock 00:07:07.864 07:16:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1227018 ']' 00:07:07.864 07:16:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:07.864 07:16:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.864 07:16:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:07.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
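Round 0 above walks the full NBD bring-up: export Malloc0 and Malloc1 as /dev/nbd0 and /dev/nbd1, then gate on waitfornbd, which insists the device appears in /proc/partitions and that a 4 KiB O_DIRECT read returns data. A condensed sketch of that readiness check (the retry count, dd invocation and non-zero-size test follow the trace; the scratch path and sleep interval are illustrative):

waitfornbd() {
    local nbd_name=$1 i size
    local testfile=/tmp/nbdtest                  # the trace uses test/event/nbdtest
    for (( i = 1; i <= 20; i++ )); do            # wait for the kernel to publish the device
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    for (( i = 1; i <= 20; i++ )); do            # then prove it is readable with O_DIRECT
        if dd if="/dev/$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s "$testfile")
            rm -f "$testfile"
            [[ $size != 0 ]] && return 0         # the '[' 4096 '!=' 0 ']' check in the trace
        fi
        sleep 0.1
    done
    return 1
}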
00:07:07.864 07:16:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.864 07:16:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.864 07:16:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.864 07:16:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:07.864 07:16:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.864 Malloc0 00:07:07.864 07:16:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:08.124 Malloc1 00:07:08.124 07:16:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:08.124 07:16:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.125 07:16:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:08.385 /dev/nbd0 00:07:08.385 07:16:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.385 07:16:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.385 07:16:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:08.385 07:16:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:08.385 07:16:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.385 07:16:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.385 07:16:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:08.386 07:16:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:08.386 07:16:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.386 07:16:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.386 07:16:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:08.386 1+0 records in 00:07:08.386 1+0 records out 00:07:08.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295157 s, 13.9 MB/s 00:07:08.386 07:16:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.386 07:16:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:08.386 07:16:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.386 07:16:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.386 07:16:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:08.386 07:16:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.386 07:16:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.386 07:16:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:08.386 /dev/nbd1 00:07:08.647 07:16:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.647 07:16:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:08.647 1+0 records in 00:07:08.647 1+0 records out 00:07:08.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212895 s, 19.2 MB/s 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.647 07:16:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:08.647 07:16:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.647 07:16:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:08.647 07:16:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.647 07:16:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.647 07:16:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.647 07:16:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:08.647 { 00:07:08.647 "nbd_device": "/dev/nbd0", 00:07:08.647 "bdev_name": "Malloc0" 00:07:08.647 }, 00:07:08.647 { 00:07:08.647 "nbd_device": "/dev/nbd1", 00:07:08.647 "bdev_name": "Malloc1" 00:07:08.647 } 00:07:08.647 ]' 00:07:08.647 07:16:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.647 { 00:07:08.647 "nbd_device": "/dev/nbd0", 00:07:08.647 "bdev_name": "Malloc0" 00:07:08.647 }, 00:07:08.647 { 00:07:08.647 "nbd_device": "/dev/nbd1", 00:07:08.647 "bdev_name": "Malloc1" 00:07:08.647 } 00:07:08.647 ]' 00:07:08.647 07:16:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.909 /dev/nbd1' 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.909 /dev/nbd1' 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:08.909 256+0 records in 00:07:08.909 256+0 records out 00:07:08.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127595 s, 82.2 MB/s 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.909 256+0 records in 00:07:08.909 256+0 records out 00:07:08.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123032 s, 85.2 MB/s 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.909 256+0 records in 00:07:08.909 256+0 records out 00:07:08.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129239 s, 81.1 MB/s 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.909 07:16:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:09.171 07:16:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:09.171 07:16:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:09.171 07:16:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:09.171 07:16:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.171 07:16:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.171 07:16:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:09.171 07:16:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.171 07:16:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.171 07:16:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.171 07:16:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.172 07:16:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.433 07:16:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.433 07:16:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:09.693 07:16:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:09.693 [2024-11-26 07:16:37.711736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.693 [2024-11-26 07:16:37.742957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.693 [2024-11-26 07:16:37.742958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.693 [2024-11-26 07:16:37.772676] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:09.693 [2024-11-26 07:16:37.772708] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.996 07:16:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:12.996 07:16:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:12.996 spdk_app_start Round 2 00:07:12.996 07:16:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1227018 /var/tmp/spdk-nbd.sock 00:07:12.996 07:16:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1227018 ']' 00:07:12.996 07:16:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.996 07:16:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.996 07:16:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:12.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
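Each round repeats the same data-integrity pass, nbd_dd_data_verify: write 1 MiB of random data (256 x 4 KiB, O_DIRECT) through every NBD device, then read it back with cmp. Condensed from the trace, with the workspace paths shortened:

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest                        # trace: test/event/nbdrandtest

# write phase: push the same random 1 MiB through both devices
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
done

# verify phase: byte-for-byte compare against the source file
for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i"                # a mismatch exits non-zero and fails the test
done
rm "$tmp_file"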
00:07:12.996 07:16:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.996 07:16:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.996 07:16:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.996 07:16:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:12.996 07:16:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.996 Malloc0 00:07:12.996 07:16:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:13.258 Malloc1 00:07:13.258 07:16:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.258 07:16:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:13.520 /dev/nbd0 00:07:13.520 07:16:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.520 07:16:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:13.520 1+0 records in 00:07:13.520 1+0 records out 00:07:13.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019957 s, 20.5 MB/s 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.520 07:16:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:13.520 07:16:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.520 07:16:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.520 07:16:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.780 /dev/nbd1 00:07:13.780 07:16:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.780 07:16:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.780 1+0 records in 00:07:13.780 1+0 records out 00:07:13.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210837 s, 19.4 MB/s 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.780 07:16:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:13.780 07:16:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.780 07:16:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.780 07:16:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.780 07:16:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.780 07:16:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.780 07:16:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:13.780 { 00:07:13.780 "nbd_device": "/dev/nbd0", 00:07:13.780 "bdev_name": "Malloc0" 00:07:13.780 }, 00:07:13.780 { 00:07:13.780 "nbd_device": "/dev/nbd1", 00:07:13.780 "bdev_name": "Malloc1" 00:07:13.780 } 00:07:13.780 ]' 00:07:13.780 07:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.780 { 00:07:13.780 "nbd_device": "/dev/nbd0", 00:07:13.780 "bdev_name": "Malloc0" 00:07:13.780 }, 00:07:13.780 { 00:07:13.780 "nbd_device": "/dev/nbd1", 00:07:13.780 "bdev_name": "Malloc1" 00:07:13.780 } 00:07:13.780 ]' 00:07:13.780 07:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:14.039 /dev/nbd1' 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:14.039 /dev/nbd1' 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:14.039 256+0 records in 00:07:14.039 256+0 records out 00:07:14.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122332 s, 85.7 MB/s 00:07:14.039 07:16:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:14.040 256+0 records in 00:07:14.040 256+0 records out 00:07:14.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119955 s, 87.4 MB/s 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:14.040 256+0 records in 00:07:14.040 256+0 records out 00:07:14.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134613 s, 77.9 MB/s 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.040 07:16:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.301 07:16:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.302 07:16:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:14.563 07:16:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:14.563 07:16:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:14.823 07:16:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.823 [2024-11-26 07:16:42.878638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.823 [2024-11-26 07:16:42.909595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.823 [2024-11-26 07:16:42.909595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.083 [2024-11-26 07:16:42.938976] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:15.083 [2024-11-26 07:16:42.939008] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:18.380 07:16:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1227018 /var/tmp/spdk-nbd.sock 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1227018 ']' 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:18.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
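After stopping the disks, every round confirms the app exports zero NBD devices: nbd_get_disks returns a JSON list, jq extracts the nbd_device fields, and grep -c counts them. A sketch of that counting step; the '|| true' mirrors the bare 'true' in the trace, which keeps grep's exit status 1 on zero matches from aborting the test under errexit:

nbd_get_count() {
    local rpc_server=$1 nbd_disks_json nbd_disks_name count
    nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"                                # 2 while the disks are up, 0 afterwards
}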
00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:18.380 07:16:45 event.app_repeat -- event/event.sh@39 -- # killprocess 1227018 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1227018 ']' 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1227018 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.380 07:16:45 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1227018 00:07:18.380 07:16:46 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.380 07:16:46 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.380 07:16:46 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1227018' 00:07:18.380 killing process with pid 1227018 00:07:18.380 07:16:46 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1227018 00:07:18.380 07:16:46 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1227018 00:07:18.380 spdk_app_start is called in Round 0. 00:07:18.380 Shutdown signal received, stop current app iteration 00:07:18.380 Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 reinitialization... 00:07:18.380 spdk_app_start is called in Round 1. 00:07:18.380 Shutdown signal received, stop current app iteration 00:07:18.380 Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 reinitialization... 00:07:18.380 spdk_app_start is called in Round 2. 00:07:18.380 Shutdown signal received, stop current app iteration 00:07:18.380 Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 reinitialization... 00:07:18.380 spdk_app_start is called in Round 3. 
00:07:18.380 Shutdown signal received, stop current app iteration 00:07:18.380 07:16:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:18.380 07:16:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:18.380 00:07:18.380 real 0m15.878s 00:07:18.380 user 0m34.887s 00:07:18.380 sys 0m2.298s 00:07:18.380 07:16:46 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.380 07:16:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.380 ************************************ 00:07:18.380 END TEST app_repeat 00:07:18.380 ************************************ 00:07:18.380 07:16:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:18.380 07:16:46 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:18.380 07:16:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.380 07:16:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.380 07:16:46 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.380 ************************************ 00:07:18.380 START TEST cpu_locks 00:07:18.380 ************************************ 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:18.380 * Looking for test storage... 00:07:18.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.380 07:16:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.380 --rc genhtml_branch_coverage=1 00:07:18.380 --rc genhtml_function_coverage=1 00:07:18.380 --rc genhtml_legend=1 00:07:18.380 --rc geninfo_all_blocks=1 00:07:18.380 --rc geninfo_unexecuted_blocks=1 00:07:18.380 00:07:18.380 ' 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.380 --rc genhtml_branch_coverage=1 00:07:18.380 --rc genhtml_function_coverage=1 00:07:18.380 --rc genhtml_legend=1 00:07:18.380 --rc geninfo_all_blocks=1 00:07:18.380 --rc geninfo_unexecuted_blocks=1 00:07:18.380 00:07:18.380 ' 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.380 --rc genhtml_branch_coverage=1 00:07:18.380 --rc genhtml_function_coverage=1 00:07:18.380 --rc genhtml_legend=1 00:07:18.380 --rc geninfo_all_blocks=1 00:07:18.380 --rc geninfo_unexecuted_blocks=1 00:07:18.380 00:07:18.380 ' 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.380 --rc genhtml_branch_coverage=1 00:07:18.380 --rc genhtml_function_coverage=1 00:07:18.380 --rc genhtml_legend=1 00:07:18.380 --rc geninfo_all_blocks=1 00:07:18.380 --rc geninfo_unexecuted_blocks=1 00:07:18.380 00:07:18.380 ' 00:07:18.380 07:16:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:18.380 07:16:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:18.380 07:16:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:18.380 07:16:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.380 07:16:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.380 ************************************ 
00:07:18.380 START TEST default_locks 00:07:18.380 ************************************ 00:07:18.380 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:18.380 07:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1230412 00:07:18.380 07:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1230412 00:07:18.380 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1230412 ']' 00:07:18.380 07:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.380 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.380 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.380 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.380 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.380 07:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.640 [2024-11-26 07:16:46.518057] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:07:18.640 [2024-11-26 07:16:46.518119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230412 ] 00:07:18.640 [2024-11-26 07:16:46.604894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.640 [2024-11-26 07:16:46.640202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.579 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.579 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:19.579 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1230412 00:07:19.579 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.579 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1230412 00:07:19.839 lslocks: write error 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1230412 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1230412 ']' 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1230412 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1230412 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1230412' 00:07:19.839 killing process with pid 1230412 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1230412 00:07:19.839 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1230412 00:07:20.154 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1230412 00:07:20.154 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1230412 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1230412 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1230412 ']' 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
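The default_locks teardown above kills the target and then runs `NOT waitforlisten 1230412`: the NOT helper inverts an exit status, so the assertion passes only because waiting on the dead pid fails (the "No such process" output appears just below). A sketch of that inverted expectation; waitforlisten_stub is a hypothetical stand-in for the harness helper, which in reality polls the RPC socket rather than just the pid:

    # Sketch: assert that re-waiting on a killed target fails.
    NOT() { ! "$@"; }                                   # invert exit status
    waitforlisten_stub() { kill -0 "$1" 2>/dev/null; }  # hypothetical liveness probe
    build/bin/spdk_tgt -m 0x1 & pid=$!
    sleep 1                                             # crude stand-in for waitforlisten
    kill "$pid"; wait "$pid" || true                    # SIGTERM makes wait return 143
    NOT waitforlisten_stub "$pid" && echo "pid $pid is gone, as expected"

The "lslocks: write error" seen a little earlier is expected noise from this pattern: grep -q exits at the first match and closes the pipe, so lslocks gets EPIPE while still writing.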
00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1230412) - No such process 00:07:20.155 ERROR: process (pid: 1230412) is no longer running 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:20.155 00:07:20.155 real 0m1.506s 00:07:20.155 user 0m1.641s 00:07:20.155 sys 0m0.519s 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.155 07:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.155 ************************************ 00:07:20.155 END TEST default_locks 00:07:20.155 ************************************ 00:07:20.155 07:16:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:20.155 07:16:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.155 07:16:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.155 07:16:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.155 ************************************ 00:07:20.155 START TEST default_locks_via_rpc 00:07:20.155 ************************************ 00:07:20.155 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:20.155 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1230702 00:07:20.155 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1230702 00:07:20.155 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.155 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1230702 ']' 00:07:20.155 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.155 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.155 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
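default_locks_via_rpc, which starts here, exercises the same core-lock lifecycle without killing anything: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-claims them, both over plain JSON-RPC (the rpc_cmd calls are visible in the next stretch of trace). A minimal sketch of that toggle against a target already listening on the default /var/tmp/spdk.sock:

    # Sketch: release and re-claim CPU-core lock files at runtime via RPC.
    scripts/rpc.py framework_disable_cpumask_locks   # drops /var/tmp/spdk_cpu_lock_*
    lslocks | grep -c spdk_cpu_lock                  # expect 0 while disabled
    scripts/rpc.py framework_enable_cpumask_locks    # re-claims the lock files
    lslocks | grep -c spdk_cpu_lock                  # expect 1 again for -m 0x1
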
00:07:20.155 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.155 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.155 [2024-11-26 07:16:48.102695] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:07:20.155 [2024-11-26 07:16:48.102750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230702 ] 00:07:20.155 [2024-11-26 07:16:48.186511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.155 [2024-11-26 07:16:48.222101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1230702 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1230702 00:07:20.828 07:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.441 07:16:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1230702 00:07:21.441 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1230702 ']' 00:07:21.441 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1230702 00:07:21.441 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:21.441 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.441 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1230702 00:07:21.441 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.441 
07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.441 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1230702' 00:07:21.441 killing process with pid 1230702 00:07:21.441 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1230702 00:07:21.441 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1230702 00:07:21.701 00:07:21.701 real 0m1.564s 00:07:21.701 user 0m1.684s 00:07:21.701 sys 0m0.527s 00:07:21.701 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.701 07:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.701 ************************************ 00:07:21.701 END TEST default_locks_via_rpc 00:07:21.701 ************************************ 00:07:21.701 07:16:49 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:21.701 07:16:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.701 07:16:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.701 07:16:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.701 ************************************ 00:07:21.701 START TEST non_locking_app_on_locked_coremask 00:07:21.701 ************************************ 00:07:21.701 07:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:21.701 07:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1231066 00:07:21.701 07:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1231066 /var/tmp/spdk.sock 00:07:21.701 07:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.701 07:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1231066 ']' 00:07:21.701 07:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.701 07:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.701 07:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.701 07:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.701 07:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.701 [2024-11-26 07:16:49.742141] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:07:21.701 [2024-11-26 07:16:49.742211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231066 ] 00:07:21.961 [2024-11-26 07:16:49.829433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.961 [2024-11-26 07:16:49.868555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1231382 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1231382 /var/tmp/spdk2.sock 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1231382 ']' 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.532 07:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.532 [2024-11-26 07:16:50.591997] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:07:22.532 [2024-11-26 07:16:50.592052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231382 ] 00:07:22.792 [2024-11-26 07:16:50.677997] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
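The "CPU core locks deactivated" notice above is the crux of non_locking_app_on_locked_coremask: pid 1231066 already holds the core-0 lock, yet pid 1231382 starts cleanly on the same mask because --disable-cpumask-locks makes it skip the claim, and -r gives it its own RPC socket so both targets can be driven independently. A sketch of the two launches, using only the flags shown in the trace:

    # Sketch: share a locked core by opting the second target out of locking.
    build/bin/spdk_tgt -m 0x1 &                                # claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
                       -r /var/tmp/spdk2.sock &                # skips the claim
    # Without --disable-cpumask-locks the second launch would abort with
    # "Cannot create lock on core 0, probably process <pid> has claimed it."
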
00:07:22.792 [2024-11-26 07:16:50.678024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.792 [2024-11-26 07:16:50.740177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.363 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.363 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:23.363 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1231066 00:07:23.363 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1231066 00:07:23.363 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.624 lslocks: write error 00:07:23.624 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1231066 00:07:23.624 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1231066 ']' 00:07:23.624 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1231066 00:07:23.624 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:23.624 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.624 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1231066 00:07:23.884 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.884 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.884 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1231066' 00:07:23.884 killing process with pid 1231066 00:07:23.884 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1231066 00:07:23.884 07:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1231066 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1231382 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1231382 ']' 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1231382 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1231382 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1231382' 00:07:24.144 
killing process with pid 1231382 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1231382 00:07:24.144 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1231382 00:07:24.405 00:07:24.405 real 0m2.696s 00:07:24.405 user 0m3.035s 00:07:24.405 sys 0m0.803s 00:07:24.405 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.405 07:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.405 ************************************ 00:07:24.405 END TEST non_locking_app_on_locked_coremask 00:07:24.405 ************************************ 00:07:24.405 07:16:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:24.405 07:16:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.405 07:16:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.405 07:16:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.405 ************************************ 00:07:24.405 START TEST locking_app_on_unlocked_coremask 00:07:24.405 ************************************ 00:07:24.405 07:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:24.405 07:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1231752 00:07:24.405 07:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1231752 /var/tmp/spdk.sock 00:07:24.405 07:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:24.405 07:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1231752 ']' 00:07:24.405 07:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.405 07:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.405 07:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.405 07:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.405 07:16:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.666 [2024-11-26 07:16:52.523716] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:07:24.666 [2024-11-26 07:16:52.523771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231752 ] 00:07:24.666 [2024-11-26 07:16:52.606272] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
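locking_app_on_unlocked_coremask, starting here, flips the arrangement: the first target (pid 1231752) runs with --disable-cpumask-locks, so core 0 stays unlocked and the normally-locking second instance that follows can claim it. A sketch of verifying which side ends up owning the lock file, assuming lslocks behaves as in the traces above:

    # Sketch: unlocked first instance, locking second instance, one core.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # takes no lock
    unlocked=$!
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims core 0
    locked=$!
    sleep 1                                               # stand-in for waitforlisten
    lslocks -p "$unlocked" | grep -c spdk_cpu_lock        # expect 0
    lslocks -p "$locked"   | grep -c spdk_cpu_lock        # expect 1
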
00:07:24.666 [2024-11-26 07:16:52.606295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.666 [2024-11-26 07:16:52.637775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1231858 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1231858 /var/tmp/spdk2.sock 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1231858 ']' 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.237 07:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.499 [2024-11-26 07:16:53.350928] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:07:25.499 [2024-11-26 07:16:53.350983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231858 ] 00:07:25.499 [2024-11-26 07:16:53.435704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.499 [2024-11-26 07:16:53.494107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.071 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.071 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:26.071 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1231858 00:07:26.071 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1231858 00:07:26.071 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.643 lslocks: write error 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1231752 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1231752 ']' 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1231752 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1231752 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1231752' 00:07:26.643 killing process with pid 1231752 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1231752 00:07:26.643 07:16:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1231752 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1231858 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1231858 ']' 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1231858 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1231858 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.216 07:16:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1231858' 00:07:27.216 killing process with pid 1231858 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1231858 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1231858 00:07:27.216 00:07:27.216 real 0m2.825s 00:07:27.216 user 0m3.140s 00:07:27.216 sys 0m0.873s 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.216 07:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.216 ************************************ 00:07:27.216 END TEST locking_app_on_unlocked_coremask 00:07:27.216 ************************************ 00:07:27.478 07:16:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:27.478 07:16:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.478 07:16:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.478 07:16:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.478 ************************************ 00:07:27.478 START TEST locking_app_on_locked_coremask 00:07:27.478 ************************************ 00:07:27.478 07:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:27.478 07:16:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1232421 00:07:27.478 07:16:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1232421 /var/tmp/spdk.sock 00:07:27.478 07:16:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.478 07:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1232421 ']' 00:07:27.478 07:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.478 07:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.478 07:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.478 07:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.478 07:16:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.478 [2024-11-26 07:16:55.415819] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:07:27.478 [2024-11-26 07:16:55.415879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232421 ] 00:07:27.478 [2024-11-26 07:16:55.500513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.478 [2024-11-26 07:16:55.531562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1232473 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1232473 /var/tmp/spdk2.sock 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1232473 /var/tmp/spdk2.sock 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1232473 /var/tmp/spdk2.sock 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1232473 ']' 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.421 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.421 [2024-11-26 07:16:56.255701] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:07:28.421 [2024-11-26 07:16:56.255753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232473 ] 00:07:28.421 [2024-11-26 07:16:56.344249] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1232421 has claimed it. 00:07:28.421 [2024-11-26 07:16:56.344284] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:28.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1232473) - No such process 00:07:28.993 ERROR: process (pid: 1232473) is no longer running 00:07:28.993 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.993 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:28.993 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:28.993 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.993 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.993 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.993 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1232421 00:07:28.993 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1232421 00:07:28.993 07:16:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.563 lslocks: write error 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1232421 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1232421 ']' 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1232421 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1232421 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1232421' 00:07:29.563 killing process with pid 1232421 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1232421 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1232421 00:07:29.563 00:07:29.563 real 0m2.269s 00:07:29.563 user 0m2.574s 00:07:29.563 sys 0m0.639s 00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:07:29.563 07:16:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.563 ************************************ 00:07:29.563 END TEST locking_app_on_locked_coremask 00:07:29.563 ************************************ 00:07:29.823 07:16:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:29.823 07:16:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.823 07:16:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.823 07:16:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.823 ************************************ 00:07:29.823 START TEST locking_overlapped_coremask 00:07:29.823 ************************************ 00:07:29.823 07:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:29.823 07:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1232841 00:07:29.823 07:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1232841 /var/tmp/spdk.sock 00:07:29.823 07:16:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:29.823 07:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1232841 ']' 00:07:29.823 07:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.823 07:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.823 07:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.823 07:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.823 07:16:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.823 [2024-11-26 07:16:57.764109] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:07:29.823 [2024-11-26 07:16:57.764187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232841 ] 00:07:29.823 [2024-11-26 07:16:57.852465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.823 [2024-11-26 07:16:57.889251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.824 [2024-11-26 07:16:57.889395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.824 [2024-11-26 07:16:57.889397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1233032 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1233032 /var/tmp/spdk2.sock 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1233032 /var/tmp/spdk2.sock 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1233032 /var/tmp/spdk2.sock 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1233032 ']' 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.767 07:16:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.767 [2024-11-26 07:16:58.620591] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:07:30.767 [2024-11-26 07:16:58.620644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233032 ] 00:07:30.767 [2024-11-26 07:16:58.733134] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1232841 has claimed it. 00:07:30.767 [2024-11-26 07:16:58.733175] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:31.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1233032) - No such process 00:07:31.343 ERROR: process (pid: 1233032) is no longer running 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1232841 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1232841 ']' 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1232841 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1232841 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1232841' 00:07:31.343 killing process with pid 1232841 00:07:31.343 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1232841 00:07:31.343 07:16:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1232841 00:07:31.603 00:07:31.603 real 0m1.782s 00:07:31.603 user 0m5.158s 00:07:31.603 sys 0m0.386s 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.603 ************************************ 00:07:31.603 END TEST locking_overlapped_coremask 00:07:31.603 ************************************ 00:07:31.603 07:16:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:31.603 07:16:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.603 07:16:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.603 07:16:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.603 ************************************ 00:07:31.603 START TEST locking_overlapped_coremask_via_rpc 00:07:31.603 ************************************ 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1233213 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1233213 /var/tmp/spdk.sock 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1233213 ']' 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.603 07:16:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.603 [2024-11-26 07:16:59.621315] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:07:31.603 [2024-11-26 07:16:59.621364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233213 ] 00:07:31.864 [2024-11-26 07:16:59.705092] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:31.864 [2024-11-26 07:16:59.705118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.864 [2024-11-26 07:16:59.738441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.864 [2024-11-26 07:16:59.738587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.864 [2024-11-26 07:16:59.738588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1233497 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1233497 /var/tmp/spdk2.sock 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1233497 ']' 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.438 07:17:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.438 [2024-11-26 07:17:00.484377] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:07:32.438 [2024-11-26 07:17:00.484435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233497 ] 00:07:32.699 [2024-11-26 07:17:00.595755] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
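Note the difference from the previous test: both instances start with --disable-cpumask-locks, so neither takes the per-core file locks at boot and the overlapping masks coexist. The locks are only claimed later, via the framework_enable_cpumask_locks RPC that the trace invokes next. A hedged sketch of that sequence (rpc.py and the -s socket flag appear in the trace itself):

    # Instance one (default socket) claims cores 0-2; expected to succeed.
    ./scripts/rpc.py framework_enable_cpumask_locks

    # Instance two tries to claim cores 2-4 over its own socket; core 2 is
    # already held, so this returns the JSON-RPC error shown just below.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks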
00:07:32.699 [2024-11-26 07:17:00.595788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.699 [2024-11-26 07:17:00.668900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.699 [2024-11-26 07:17:00.672280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.699 [2024-11-26 07:17:00.672281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.270 [2024-11-26 07:17:01.265241] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1233213 has claimed it. 
00:07:33.270 request: 00:07:33.270 { 00:07:33.270 "method": "framework_enable_cpumask_locks", 00:07:33.270 "req_id": 1 00:07:33.270 } 00:07:33.270 Got JSON-RPC error response 00:07:33.270 response: 00:07:33.270 { 00:07:33.270 "code": -32603, 00:07:33.270 "message": "Failed to claim CPU core: 2" 00:07:33.270 } 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1233213 /var/tmp/spdk.sock 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1233213 ']' 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.270 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.271 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.271 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.271 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.533 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.533 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:33.533 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1233497 /var/tmp/spdk2.sock 00:07:33.533 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1233497 ']' 00:07:33.533 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.533 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.533 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
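The -32603 code in that response is JSON-RPC's generic "internal error", so the actionable detail travels in the message string ("Failed to claim CPU core: 2"). A sketch of how a caller might assert on this outcome, with the caveat that rpc.py emitting the message on stderr in exactly this form is an assumption:

    if out=$(./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 2>&1); then
        echo "unexpected success claiming overlapped cores" >&2; exit 1
    fi
    # Match on the message, not the generic error code.
    grep -q 'Failed to claim CPU core: 2' <<< "$out"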
00:07:33.533 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.533 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.794 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.794 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:33.794 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:33.794 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:33.795 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:33.795 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:33.795 00:07:33.795 real 0m2.078s 00:07:33.795 user 0m0.847s 00:07:33.795 sys 0m0.160s 00:07:33.795 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.795 07:17:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.795 ************************************ 00:07:33.795 END TEST locking_overlapped_coremask_via_rpc 00:07:33.795 ************************************ 00:07:33.795 07:17:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:33.795 07:17:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1233213 ]] 00:07:33.795 07:17:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1233213 00:07:33.795 07:17:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1233213 ']' 00:07:33.795 07:17:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1233213 00:07:33.795 07:17:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:33.795 07:17:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.795 07:17:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1233213 00:07:33.795 07:17:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.795 07:17:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.795 07:17:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1233213' 00:07:33.795 killing process with pid 1233213 00:07:33.795 07:17:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1233213 00:07:33.795 07:17:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1233213 00:07:34.056 07:17:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1233497 ]] 00:07:34.056 07:17:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1233497 00:07:34.056 07:17:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1233497 ']' 00:07:34.056 07:17:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1233497 00:07:34.056 07:17:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:34.056 07:17:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:07:34.056 07:17:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1233497 00:07:34.056 07:17:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:34.056 07:17:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:34.056 07:17:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1233497' 00:07:34.056 killing process with pid 1233497 00:07:34.056 07:17:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1233497 00:07:34.056 07:17:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1233497 00:07:34.317 07:17:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.317 07:17:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:34.317 07:17:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1233213 ]] 00:07:34.317 07:17:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1233213 00:07:34.317 07:17:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1233213 ']' 00:07:34.317 07:17:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1233213 00:07:34.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1233213) - No such process 00:07:34.317 07:17:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1233213 is not found' 00:07:34.317 Process with pid 1233213 is not found 00:07:34.317 07:17:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1233497 ]] 00:07:34.317 07:17:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1233497 00:07:34.317 07:17:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1233497 ']' 00:07:34.317 07:17:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1233497 00:07:34.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1233497) - No such process 00:07:34.317 07:17:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1233497 is not found' 00:07:34.317 Process with pid 1233497 is not found 00:07:34.317 07:17:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.317 00:07:34.317 real 0m15.973s 00:07:34.317 user 0m28.016s 00:07:34.317 sys 0m4.858s 00:07:34.317 07:17:02 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.317 07:17:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.317 ************************************ 00:07:34.317 END TEST cpu_locks 00:07:34.317 ************************************ 00:07:34.317 00:07:34.317 real 0m41.897s 00:07:34.317 user 1m22.350s 00:07:34.317 sys 0m8.261s 00:07:34.317 07:17:02 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.317 07:17:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.317 ************************************ 00:07:34.317 END TEST event 00:07:34.317 ************************************ 00:07:34.317 07:17:02 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:34.317 07:17:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.317 07:17:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.317 07:17:02 -- common/autotest_common.sh@10 -- # set +x 00:07:34.317 ************************************ 00:07:34.317 START TEST thread 00:07:34.317 ************************************ 00:07:34.317 07:17:02 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:34.317 * Looking for test storage... 00:07:34.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:34.317 07:17:02 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:34.317 07:17:02 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:34.317 07:17:02 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:34.578 07:17:02 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:34.578 07:17:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.578 07:17:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.578 07:17:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.578 07:17:02 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.578 07:17:02 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.578 07:17:02 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.579 07:17:02 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.579 07:17:02 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.579 07:17:02 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.579 07:17:02 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.579 07:17:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.579 07:17:02 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:34.579 07:17:02 thread -- scripts/common.sh@345 -- # : 1 00:07:34.579 07:17:02 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.579 07:17:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:34.579 07:17:02 thread -- scripts/common.sh@365 -- # decimal 1 00:07:34.579 07:17:02 thread -- scripts/common.sh@353 -- # local d=1 00:07:34.579 07:17:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.579 07:17:02 thread -- scripts/common.sh@355 -- # echo 1 00:07:34.579 07:17:02 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.579 07:17:02 thread -- scripts/common.sh@366 -- # decimal 2 00:07:34.579 07:17:02 thread -- scripts/common.sh@353 -- # local d=2 00:07:34.579 07:17:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.579 07:17:02 thread -- scripts/common.sh@355 -- # echo 2 00:07:34.579 07:17:02 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.579 07:17:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.579 07:17:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.579 07:17:02 thread -- scripts/common.sh@368 -- # return 0 00:07:34.579 07:17:02 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.579 07:17:02 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:34.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.579 --rc genhtml_branch_coverage=1 00:07:34.579 --rc genhtml_function_coverage=1 00:07:34.579 --rc genhtml_legend=1 00:07:34.579 --rc geninfo_all_blocks=1 00:07:34.579 --rc geninfo_unexecuted_blocks=1 00:07:34.579 00:07:34.579 ' 00:07:34.579 07:17:02 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:34.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.579 --rc genhtml_branch_coverage=1 00:07:34.579 --rc genhtml_function_coverage=1 00:07:34.579 --rc genhtml_legend=1 00:07:34.579 --rc geninfo_all_blocks=1 00:07:34.579 --rc geninfo_unexecuted_blocks=1 00:07:34.579 
00:07:34.579 ' 00:07:34.579 07:17:02 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:34.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.579 --rc genhtml_branch_coverage=1 00:07:34.579 --rc genhtml_function_coverage=1 00:07:34.579 --rc genhtml_legend=1 00:07:34.579 --rc geninfo_all_blocks=1 00:07:34.579 --rc geninfo_unexecuted_blocks=1 00:07:34.579 00:07:34.579 ' 00:07:34.579 07:17:02 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:34.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.579 --rc genhtml_branch_coverage=1 00:07:34.579 --rc genhtml_function_coverage=1 00:07:34.579 --rc genhtml_legend=1 00:07:34.579 --rc geninfo_all_blocks=1 00:07:34.579 --rc geninfo_unexecuted_blocks=1 00:07:34.579 00:07:34.579 ' 00:07:34.579 07:17:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:34.579 07:17:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:34.579 07:17:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.579 07:17:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.579 ************************************ 00:07:34.579 START TEST thread_poller_perf 00:07:34.579 ************************************ 00:07:34.579 07:17:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:34.579 [2024-11-26 07:17:02.563568] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:07:34.579 [2024-11-26 07:17:02.563668] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233996 ] 00:07:34.579 [2024-11-26 07:17:02.650408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.840 [2024-11-26 07:17:02.682329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.840 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:35.783 [2024-11-26T06:17:03.881Z] ====================================== 00:07:35.783 [2024-11-26T06:17:03.881Z] busy:2410046034 (cyc) 00:07:35.783 [2024-11-26T06:17:03.881Z] total_run_count: 419000 00:07:35.783 [2024-11-26T06:17:03.881Z] tsc_hz: 2400000000 (cyc) 00:07:35.783 [2024-11-26T06:17:03.881Z] ====================================== 00:07:35.783 [2024-11-26T06:17:03.881Z] poller_cost: 5751 (cyc), 2396 (nsec) 00:07:35.783 00:07:35.783 real 0m1.175s 00:07:35.783 user 0m1.095s 00:07:35.783 sys 0m0.077s 00:07:35.783 07:17:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.783 07:17:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.783 ************************************ 00:07:35.783 END TEST thread_poller_perf 00:07:35.783 ************************************ 00:07:35.783 07:17:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:35.783 07:17:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:35.783 07:17:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.783 07:17:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.783 ************************************ 00:07:35.783 START TEST thread_poller_perf 00:07:35.783 ************************************ 00:07:35.783 07:17:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:35.783 [2024-11-26 07:17:03.816089] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:07:35.783 [2024-11-26 07:17:03.816181] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234296 ] 00:07:36.045 [2024-11-26 07:17:03.905931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.045 [2024-11-26 07:17:03.937788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.045 Running 1000 pollers for 1 seconds with 0 microseconds period. 
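The poller_cost line in these tables is simply busy cycles divided by run count, converted to nanoseconds via the reported TSC rate: 2410046034 / 419000 gives 5751 cyc for the 1 µs-period run above, and 5751 / 2.4 GHz gives 2396 nsec. The 0 µs busy-poll run whose results follow uses the same formula, just with far more iterations (5568000) and a much lower per-poll cost (431 cyc, 179 nsec). A one-liner reproducing the arithmetic from the numbers above:

    awk 'BEGIN {
        busy = 2410046034; runs = 419000; tsc_hz = 2400000000  # values from the table above
        cyc  = busy / runs                                     # cycles per poller invocation
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc_hz
    }'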
00:07:36.988 [2024-11-26T06:17:05.086Z] ====================================== 00:07:36.988 [2024-11-26T06:17:05.086Z] busy:2401649916 (cyc) 00:07:36.988 [2024-11-26T06:17:05.086Z] total_run_count: 5568000 00:07:36.988 [2024-11-26T06:17:05.086Z] tsc_hz: 2400000000 (cyc) 00:07:36.988 [2024-11-26T06:17:05.086Z] ====================================== 00:07:36.988 [2024-11-26T06:17:05.086Z] poller_cost: 431 (cyc), 179 (nsec) 00:07:36.988 00:07:36.988 real 0m1.170s 00:07:36.988 user 0m1.088s 00:07:36.988 sys 0m0.079s 00:07:36.988 07:17:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.988 07:17:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:36.988 ************************************ 00:07:36.988 END TEST thread_poller_perf 00:07:36.988 ************************************ 00:07:36.988 07:17:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:36.988 00:07:36.988 real 0m2.699s 00:07:36.988 user 0m2.366s 00:07:36.988 sys 0m0.349s 00:07:36.988 07:17:05 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.988 07:17:05 thread -- common/autotest_common.sh@10 -- # set +x 00:07:36.988 ************************************ 00:07:36.988 END TEST thread 00:07:36.988 ************************************ 00:07:36.988 07:17:05 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:36.988 07:17:05 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.988 07:17:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.988 07:17:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.988 07:17:05 -- common/autotest_common.sh@10 -- # set +x 00:07:36.988 ************************************ 00:07:37.250 START TEST app_cmdline 00:07:37.250 ************************************ 00:07:37.250 07:17:05 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:37.250 * Looking for test storage... 
00:07:37.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:37.250 07:17:05 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.250 07:17:05 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.250 07:17:05 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.250 07:17:05 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.250 07:17:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:37.250 07:17:05 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.250 07:17:05 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.250 --rc genhtml_branch_coverage=1 00:07:37.250 --rc genhtml_function_coverage=1 00:07:37.250 --rc genhtml_legend=1 00:07:37.250 --rc geninfo_all_blocks=1 00:07:37.250 --rc geninfo_unexecuted_blocks=1 00:07:37.250 00:07:37.250 ' 00:07:37.250 07:17:05 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.250 --rc genhtml_branch_coverage=1 00:07:37.250 --rc genhtml_function_coverage=1 00:07:37.250 --rc genhtml_legend=1 00:07:37.250 --rc geninfo_all_blocks=1 00:07:37.250 --rc geninfo_unexecuted_blocks=1 
00:07:37.250 00:07:37.250 ' 00:07:37.250 07:17:05 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.250 --rc genhtml_branch_coverage=1 00:07:37.250 --rc genhtml_function_coverage=1 00:07:37.250 --rc genhtml_legend=1 00:07:37.250 --rc geninfo_all_blocks=1 00:07:37.250 --rc geninfo_unexecuted_blocks=1 00:07:37.250 00:07:37.250 ' 00:07:37.250 07:17:05 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.251 --rc genhtml_branch_coverage=1 00:07:37.251 --rc genhtml_function_coverage=1 00:07:37.251 --rc genhtml_legend=1 00:07:37.251 --rc geninfo_all_blocks=1 00:07:37.251 --rc geninfo_unexecuted_blocks=1 00:07:37.251 00:07:37.251 ' 00:07:37.251 07:17:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:37.251 07:17:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1234589 00:07:37.251 07:17:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1234589 00:07:37.251 07:17:05 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:37.251 07:17:05 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1234589 ']' 00:07:37.251 07:17:05 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.251 07:17:05 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.251 07:17:05 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.251 07:17:05 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.251 07:17:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.251 [2024-11-26 07:17:05.339708] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
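This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable; anything else should be rejected with -32601 "Method not found", which is exactly what the env_dpdk_get_mem_stats probe further down demonstrates. A sketch of the same three calls (run against the target's default socket, which is an assumption here):

    # On the allowlist: both answer normally.
    ./scripts/rpc.py spdk_get_version
    ./scripts/rpc.py rpc_get_methods

    # Not on the allowlist: expect code -32601 "Method not found".
    ./scripts/rpc.py env_dpdk_get_mem_stats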
00:07:37.251 [2024-11-26 07:17:05.339771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234589 ] 00:07:37.511 [2024-11-26 07:17:05.428019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.511 [2024-11-26 07:17:05.469309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.082 07:17:06 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.082 07:17:06 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:38.082 07:17:06 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:38.344 { 00:07:38.344 "version": "SPDK v25.01-pre git sha1 9ebbe7008", 00:07:38.344 "fields": { 00:07:38.344 "major": 25, 00:07:38.344 "minor": 1, 00:07:38.344 "patch": 0, 00:07:38.344 "suffix": "-pre", 00:07:38.344 "commit": "9ebbe7008" 00:07:38.344 } 00:07:38.344 } 00:07:38.344 07:17:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:38.344 07:17:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:38.344 07:17:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:38.344 07:17:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:38.344 07:17:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:38.344 07:17:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:38.344 07:17:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.344 07:17:06 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:38.344 07:17:06 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:38.344 07:17:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:38.344 07:17:06 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.605 request: 00:07:38.605 { 00:07:38.605 "method": "env_dpdk_get_mem_stats", 00:07:38.605 "req_id": 1 00:07:38.605 } 00:07:38.605 Got JSON-RPC error response 00:07:38.605 response: 00:07:38.605 { 00:07:38.605 "code": -32601, 00:07:38.605 "message": "Method not found" 00:07:38.605 } 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.605 07:17:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1234589 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1234589 ']' 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1234589 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1234589 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1234589' 00:07:38.605 killing process with pid 1234589 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@973 -- # kill 1234589 00:07:38.605 07:17:06 app_cmdline -- common/autotest_common.sh@978 -- # wait 1234589 00:07:38.866 00:07:38.866 real 0m1.720s 00:07:38.866 user 0m2.041s 00:07:38.866 sys 0m0.486s 00:07:38.866 07:17:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.866 07:17:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.866 ************************************ 00:07:38.866 END TEST app_cmdline 00:07:38.866 ************************************ 00:07:38.866 07:17:06 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:38.866 07:17:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.866 07:17:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.866 07:17:06 -- common/autotest_common.sh@10 -- # set +x 00:07:38.866 ************************************ 00:07:38.866 START TEST version 00:07:38.866 ************************************ 00:07:38.866 07:17:06 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:39.128 * Looking for test storage... 
00:07:39.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:39.128 07:17:06 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.128 07:17:06 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.128 07:17:06 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.128 07:17:07 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.128 07:17:07 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.128 07:17:07 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.128 07:17:07 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.128 07:17:07 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.128 07:17:07 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.128 07:17:07 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.128 07:17:07 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.128 07:17:07 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.128 07:17:07 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.128 07:17:07 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.128 07:17:07 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.128 07:17:07 version -- scripts/common.sh@344 -- # case "$op" in 00:07:39.128 07:17:07 version -- scripts/common.sh@345 -- # : 1 00:07:39.128 07:17:07 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.128 07:17:07 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.128 07:17:07 version -- scripts/common.sh@365 -- # decimal 1 00:07:39.128 07:17:07 version -- scripts/common.sh@353 -- # local d=1 00:07:39.128 07:17:07 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.128 07:17:07 version -- scripts/common.sh@355 -- # echo 1 00:07:39.128 07:17:07 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.128 07:17:07 version -- scripts/common.sh@366 -- # decimal 2 00:07:39.128 07:17:07 version -- scripts/common.sh@353 -- # local d=2 00:07:39.128 07:17:07 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.128 07:17:07 version -- scripts/common.sh@355 -- # echo 2 00:07:39.128 07:17:07 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.128 07:17:07 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.128 07:17:07 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.128 07:17:07 version -- scripts/common.sh@368 -- # return 0 00:07:39.128 07:17:07 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.128 07:17:07 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.128 --rc genhtml_branch_coverage=1 00:07:39.128 --rc genhtml_function_coverage=1 00:07:39.128 --rc genhtml_legend=1 00:07:39.128 --rc geninfo_all_blocks=1 00:07:39.128 --rc geninfo_unexecuted_blocks=1 00:07:39.128 00:07:39.128 ' 00:07:39.128 07:17:07 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.128 --rc genhtml_branch_coverage=1 00:07:39.128 --rc genhtml_function_coverage=1 00:07:39.128 --rc genhtml_legend=1 00:07:39.128 --rc geninfo_all_blocks=1 00:07:39.128 --rc geninfo_unexecuted_blocks=1 00:07:39.128 00:07:39.128 ' 00:07:39.128 07:17:07 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.128 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.128 --rc genhtml_branch_coverage=1 00:07:39.128 --rc genhtml_function_coverage=1 00:07:39.128 --rc genhtml_legend=1 00:07:39.128 --rc geninfo_all_blocks=1 00:07:39.128 --rc geninfo_unexecuted_blocks=1 00:07:39.129 00:07:39.129 ' 00:07:39.129 07:17:07 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.129 --rc genhtml_branch_coverage=1 00:07:39.129 --rc genhtml_function_coverage=1 00:07:39.129 --rc genhtml_legend=1 00:07:39.129 --rc geninfo_all_blocks=1 00:07:39.129 --rc geninfo_unexecuted_blocks=1 00:07:39.129 00:07:39.129 ' 00:07:39.129 07:17:07 version -- app/version.sh@17 -- # get_header_version major 00:07:39.129 07:17:07 version -- app/version.sh@14 -- # cut -f2 00:07:39.129 07:17:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.129 07:17:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.129 07:17:07 version -- app/version.sh@17 -- # major=25 00:07:39.129 07:17:07 version -- app/version.sh@18 -- # get_header_version minor 00:07:39.129 07:17:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.129 07:17:07 version -- app/version.sh@14 -- # cut -f2 00:07:39.129 07:17:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.129 07:17:07 version -- app/version.sh@18 -- # minor=1 00:07:39.129 07:17:07 version -- app/version.sh@19 -- # get_header_version patch 00:07:39.129 07:17:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.129 07:17:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.129 07:17:07 version -- app/version.sh@14 -- # cut -f2 00:07:39.129 07:17:07 version -- app/version.sh@19 -- # patch=0 00:07:39.129 07:17:07 version -- app/version.sh@20 -- # get_header_version suffix 00:07:39.129 07:17:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.129 07:17:07 version -- app/version.sh@14 -- # cut -f2 00:07:39.129 07:17:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.129 07:17:07 version -- app/version.sh@20 -- # suffix=-pre 00:07:39.129 07:17:07 version -- app/version.sh@22 -- # version=25.1 00:07:39.129 07:17:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.129 07:17:07 version -- app/version.sh@28 -- # version=25.1rc0 00:07:39.129 07:17:07 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:39.129 07:17:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:39.129 07:17:07 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:39.129 07:17:07 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:39.129 00:07:39.129 real 0m0.284s 00:07:39.129 user 0m0.163s 00:07:39.129 sys 0m0.166s 00:07:39.129 07:17:07 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.129 
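The get_header_version calls traced above are just grep/cut/tr over include/spdk/version.h; the pieces are then reassembled as 25.1rc0 (the "-pre" suffix maps to "rc0", and patch 0 is omitted) and compared against Python's spdk.__version__. A standalone sketch of the extraction, assuming it is run from the repo root:

    get_header_version() {
        # $1 is MAJOR, MINOR, PATCH or SUFFIX.
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"
    [[ $(get_header_version SUFFIX) == -pre ]] && version+=rc0
    echo "$version"   # 25.1rc0 for this tree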
07:17:07 version -- common/autotest_common.sh@10 -- # set +x 00:07:39.129 ************************************ 00:07:39.129 END TEST version 00:07:39.129 ************************************ 00:07:39.129 07:17:07 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:39.129 07:17:07 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:39.129 07:17:07 -- spdk/autotest.sh@194 -- # uname -s 00:07:39.129 07:17:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:39.129 07:17:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:39.129 07:17:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:39.129 07:17:07 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:39.129 07:17:07 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:39.129 07:17:07 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:39.129 07:17:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.129 07:17:07 -- common/autotest_common.sh@10 -- # set +x 00:07:39.390 07:17:07 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:39.390 07:17:07 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:39.390 07:17:07 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:39.390 07:17:07 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:39.390 07:17:07 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:39.390 07:17:07 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:39.390 07:17:07 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.390 07:17:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:39.390 07:17:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.390 07:17:07 -- common/autotest_common.sh@10 -- # set +x 00:07:39.390 ************************************ 00:07:39.390 START TEST nvmf_tcp 00:07:39.390 ************************************ 00:07:39.390 07:17:07 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.390 * Looking for test storage... 
00:07:39.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:39.390 07:17:07 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.390 07:17:07 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.390 07:17:07 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.390 07:17:07 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.390 07:17:07 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.650 07:17:07 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:39.650 07:17:07 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.650 07:17:07 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.650 --rc genhtml_branch_coverage=1 00:07:39.650 --rc genhtml_function_coverage=1 00:07:39.650 --rc genhtml_legend=1 00:07:39.650 --rc geninfo_all_blocks=1 00:07:39.650 --rc geninfo_unexecuted_blocks=1 00:07:39.650 00:07:39.650 ' 00:07:39.650 07:17:07 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.650 --rc genhtml_branch_coverage=1 00:07:39.650 --rc genhtml_function_coverage=1 00:07:39.650 --rc genhtml_legend=1 00:07:39.650 --rc geninfo_all_blocks=1 00:07:39.650 --rc geninfo_unexecuted_blocks=1 00:07:39.650 00:07:39.650 ' 00:07:39.650 07:17:07 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:39.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.650 --rc genhtml_branch_coverage=1 00:07:39.650 --rc genhtml_function_coverage=1 00:07:39.650 --rc genhtml_legend=1 00:07:39.650 --rc geninfo_all_blocks=1 00:07:39.650 --rc geninfo_unexecuted_blocks=1 00:07:39.650 00:07:39.650 ' 00:07:39.650 07:17:07 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.651 --rc genhtml_branch_coverage=1 00:07:39.651 --rc genhtml_function_coverage=1 00:07:39.651 --rc genhtml_legend=1 00:07:39.651 --rc geninfo_all_blocks=1 00:07:39.651 --rc geninfo_unexecuted_blocks=1 00:07:39.651 00:07:39.651 ' 00:07:39.651 07:17:07 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:39.651 07:17:07 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:39.651 07:17:07 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:39.651 07:17:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:39.651 07:17:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.651 07:17:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.651 ************************************ 00:07:39.651 START TEST nvmf_target_core ************************************ 00:07:39.651 07:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:39.651 * Looking for test storage... 00:07:39.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.912 07:17:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:39.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:39.913 
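The "[: : integer expression expected" message from nvmf/common.sh line 33 in the trace above is harmless noise rather than a test failure: the xtrace shows the guard expanding to '[' '' -eq 1 ']', i.e. a flag that is unset in this configuration is compared numerically while empty, so the '[' builtin returns nonzero, the branch is skipped, and the run carries on. A defensive form of that guard would default the variable before the numeric test; a minimal sketch, with SOME_FLAG standing in for whatever common.sh actually tests at line 33 (the variable name is not visible in this trace):

    # Hypothetical rewrite of the line-33 guard; SOME_FLAG is a stand-in name.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-option)    # illustrative option, not from the log
    fi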
************************************ 00:07:39.913 START TEST nvmf_abort ************************************ 00:07:39.913 07:17:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:39.913 * Looking for test storage... 00:07:39.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
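Condensed, the nvmftestinit trace that follows (PCI scan through the ping checks) builds a two-sided TCP test topology: one port of the e810 NIC is moved into a private network namespace to act as the target, while its sibling port stays in the root namespace as the initiator. A sketch of the effective commands, lifted from the xtrace below (the interface names cvl_0_0/cvl_0_1 come from the device discovery):

    ip netns add cvl_0_0_ns_spdk                        # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root ns -> target ns sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back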
00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.176 07:17:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.319 07:17:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.319 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:48.320 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:48.320 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:48.320 07:17:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:48.320 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:48.320 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.320 07:17:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:07:48.320 00:07:48.320 --- 10.0.0.2 ping statistics --- 00:07:48.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.320 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:07:48.320 00:07:48.320 --- 10.0.0.1 ping statistics --- 00:07:48.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.320 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1238954 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1238954 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1238954 ']' 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.320 07:17:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.320 [2024-11-26 07:17:15.586206] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:07:48.320 [2024-11-26 07:17:15.586269] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.320 [2024-11-26 07:17:15.686906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.320 [2024-11-26 07:17:15.741384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.320 [2024-11-26 07:17:15.741438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.321 [2024-11-26 07:17:15.741447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.321 [2024-11-26 07:17:15.741454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.321 [2024-11-26 07:17:15.741461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.321 [2024-11-26 07:17:15.743541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.321 [2024-11-26 07:17:15.743702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.321 [2024-11-26 07:17:15.743702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 [2024-11-26 07:17:16.471375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 Malloc0 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 Delay0 
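Taken together, the rpc_cmd calls in this setup phase reduce to the following rpc.py sequence (a sketch: rpc_cmd is the test harness wrapper around scripts/rpc.py talking to the nvmf_tgt started above, and the subsystem/listener calls appear in the trace just below this point):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256   # '-t tcp -o' from NVMF_TRANSPORT_OPTS, the rest from abort.sh
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0            # 64 MB backing bdev, 4096-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The Delay0 bdev is the interesting piece: assuming the usual microsecond units for bdev_delay_create, every read and write is held for roughly one second, which is what keeps I/O in flight long enough for the abort workload to have something to cancel.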
00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 [2024-11-26 07:17:16.555769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:48.582 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.583 07:17:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:48.844 [2024-11-26 07:17:16.706960] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:50.760 Initializing NVMe Controllers 00:07:50.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:50.760 controller IO queue size 128 less than required 00:07:50.760 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:50.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:50.760 Initialization complete. Launching workers. 
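The workload itself is the abort example launched above; its arguments decode roughly as follows (my reading of the invocation, the flag glosses are not stated in the log):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \   # the listener created above
        -c 0x1 \        # core mask: one worker core
        -t 1 \          # seconds to run
        -l warning \    # log level
        -q 128          # requested I/O queue depth

The -q 128 request is why the tool warned that the controller's I/O queue size of 128 is less than required and that excess requests will queue inside the NVMe driver. Read against that, the counters below are plausible: only a handful of I/Os complete through the one-second Delay0 bdev, the rest finish as failed because they were aborted, and the abort counters report how many abort commands were submitted, succeeded, missed their target, or could not be submitted at all.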
00:07:50.760 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28535 00:07:50.760 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28596, failed to submit 62 00:07:50.760 success 28539, unsuccessful 57, failed 0 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.760 rmmod nvme_tcp 00:07:50.760 rmmod nvme_fabrics 00:07:50.760 rmmod nvme_keyring 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1238954 ']' 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1238954 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1238954 ']' 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1238954 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.760 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1238954 00:07:51.021 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:51.021 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:51.022 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1238954' 00:07:51.022 killing process with pid 1238954 00:07:51.022 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1238954 00:07:51.022 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1238954 00:07:51.022 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:51.022 07:17:18 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:51.022 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:51.022 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:51.022 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:51.022 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:51.022 07:17:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:51.022 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.022 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:51.022 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.022 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.022 07:17:19 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.571 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:53.571 00:07:53.571 real 0m13.252s 00:07:53.571 user 0m13.713s 00:07:53.571 sys 0m6.554s 00:07:53.571 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.571 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.571 ************************************ 00:07:53.571 END TEST nvmf_abort 00:07:53.571 ************************************ 00:07:53.571 07:17:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:53.571 07:17:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.571 07:17:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.571 07:17:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.571 ************************************ 00:07:53.571 START TEST nvmf_ns_hotplug_stress 00:07:53.571 ************************************ 00:07:53.571 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:53.571 * Looking for test storage... 
00:07:53.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.572 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.573 07:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:01.715 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.715 
07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:01.715 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:01.715 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:01.715 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:01.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:08:01.715 00:08:01.715 --- 10.0.0.2 ping statistics --- 00:08:01.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.715 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:08:01.715 00:08:01.715 --- 10.0.0.1 ping statistics --- 00:08:01.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.715 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:01.715 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1243967 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1243967 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1243967 ']' 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.716 07:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.716 [2024-11-26 07:17:28.999821] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:08:01.716 [2024-11-26 07:17:28.999890] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.716 [2024-11-26 07:17:29.098907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.716 [2024-11-26 07:17:29.150151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.716 [2024-11-26 07:17:29.150208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.716 [2024-11-26 07:17:29.150216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.716 [2024-11-26 07:17:29.150224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.716 [2024-11-26 07:17:29.150230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
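The nvmf/common.sh trace above (@265 through @291) splits the two E810 ports across a network namespace so a single host can play both target and initiator: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables ACCEPT rule opens TCP port 4420 on the initiator interface, and one ping in each direction proves reachability before the target app starts inside the namespace. A minimal standalone sketch of that plumbing, with interface names and addresses taken from the trace (run as root; the comment-tagged iptables rule in the log comes from the ipts wrapper, plain iptables is used here):

ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # root namespace -> target
ip netns exec "$ns" ping -c 1 10.0.0.1       # target namespace -> initiator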
00:08:01.716 [2024-11-26 07:17:29.152024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.716 [2024-11-26 07:17:29.152214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.716 [2024-11-26 07:17:29.152250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.977 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.977 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:01.977 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:01.977 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:01.977 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:01.977 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.977 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:01.977 07:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:01.977 [2024-11-26 07:17:30.043929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.238 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:02.238 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.498 [2024-11-26 07:17:30.451104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.499 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.759 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:03.021 Malloc0 00:08:03.021 07:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:03.021 Delay0 00:08:03.021 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.282 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:03.543 NULL1 00:08:03.543 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:03.803 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1244592 00:08:03.803 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:03.803 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.803 07:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:04.743 Read completed with error (sct=0, sc=11) 00:08:05.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.003 07:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.003 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:05.003 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:05.003 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:05.263 true 00:08:05.263 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:05.263 07:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.203 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.203 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:06.203 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:06.463 true 00:08:06.463 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:06.463 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.724 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
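From ns_hotplug_stress.sh@40 through @50 the pattern that fills the rest of this log is established: spdk_nvme_perf (started on the @40 line above with -w randread -t 30) hammers the subsystem while the script hot-removes namespace 1, re-attaches Delay0, bumps null_size by one (1000, 1001, 1002, ...) and resizes NULL1 to match, looping for as long as the perf process stays alive. A condensed sketch of that loop as it appears in the trace; the rpc.py path and NQN are copied from the log, PERF_PID is the @42 variable, and the stderr redirect on kill is an addition for the sketch:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do        # loop while perf is alive
  "$rpc" nvmf_subsystem_remove_ns "$nqn" 1       # hot-unplug namespace 1
  "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0     # hot-plug it back
  null_size=$((null_size + 1))
  "$rpc" bdev_null_resize NULL1 "$null_size"     # grow the null bdev
done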
00:08:06.724 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:06.724 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:06.984 true 00:08:06.984 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:06.984 07:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.368 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:08.368 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:08.368 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:08.368 true 00:08:08.368 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:08.368 07:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.310 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.310 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.569 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:09.569 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:09.569 true 00:08:09.569 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:09.569 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.829 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
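Two details worth noting in the repeating block: kill -0 "$PID" (@44) delivers no signal at all, it only asks the kernel whether the PID exists and is signalable, which is why it doubles as the loop's "is perf still running" check; and the rate-limited "Read completed with error (sct=0, sc=11)" lines are the expected side effect of reads racing the hot-remove, sc=11 being 0x0b, Invalid Namespace or Format, in the NVMe generic status set. A tiny standalone illustration of the liveness idiom (sleep stands in for spdk_nvme_perf; not from the trace):

sleep 30 & PERF_PID=$!               # stand-in for the background workload
if kill -0 "$PERF_PID" 2>/dev/null; then
  echo "workload (pid $PERF_PID) still running"
else
  echo "workload exited; stop hot-plugging"
fi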
00:08:10.139 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:10.139 07:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:10.139 true 00:08:10.139 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:10.139 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.438 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.438 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:10.438 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:10.727 true 00:08:10.727 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:10.727 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.987 07:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.987 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:10.988 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:11.248 true 00:08:11.248 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:11.248 07:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.632 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.632 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.632 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:12.632 07:17:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:12.632 true 00:08:12.632 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:12.632 07:17:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.575 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.836 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:13.836 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:13.836 true 00:08:13.836 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:13.836 07:17:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.096 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.356 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:14.356 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:14.356 true 00:08:14.616 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:14.616 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.616 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.616 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.877 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:14.877 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:14.877 true 00:08:15.137 07:17:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:15.137 07:17:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.709 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.970 07:17:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.970 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:15.970 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:16.230 true 00:08:16.230 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:16.230 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.492 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.492 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:16.492 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:16.752 true 00:08:16.752 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:16.752 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.014 07:17:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.014 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:17.014 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:17.275 true 00:08:17.275 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:17.275 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.536 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.797 07:17:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:17.797 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:17.797 true 00:08:17.797 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:17.797 07:17:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.180 07:17:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.180 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:19.180 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:19.440 true 00:08:19.440 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:19.440 07:17:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.382 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.382 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:20.382 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:20.642 true 00:08:20.642 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:20.642 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.642 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.902 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1019 00:08:20.902 07:17:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:21.162 true 00:08:21.162 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:21.162 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.162 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.423 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:21.423 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:21.683 true 00:08:21.683 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:21.683 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.683 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.944 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:21.944 07:17:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:22.205 true 00:08:22.205 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:22.205 07:17:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.590 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.590 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.590 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:23.590 07:17:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:23.590 true 00:08:23.590 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:23.590 07:17:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.532 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.793 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:24.793 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:24.793 true 00:08:24.793 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:24.793 07:17:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.054 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.314 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:25.314 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:25.314 true 00:08:25.314 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:25.314 07:17:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.698 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.699 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:26.699 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1025 00:08:26.959 true 00:08:26.959 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:26.959 07:17:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.901 07:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.901 07:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:27.901 07:17:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:28.161 true 00:08:28.161 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:28.161 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.421 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.421 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:28.421 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:28.682 true 00:08:28.682 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:28.682 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.943 07:17:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.204 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:29.204 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:29.204 true 00:08:29.204 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:29.204 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.464 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.725 07:17:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:29.725 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:29.725 true 00:08:29.725 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:29.725 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.985 07:17:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.244 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:30.244 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:30.244 true 00:08:30.244 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:30.244 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.504 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.764 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:08:30.764 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:08:30.764 true 00:08:30.764 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:30.764 07:17:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.023 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.282 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:08:31.282 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:08:31.282 true 00:08:31.282 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:31.282 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.541 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.802 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:31.802 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:31.802 true 00:08:32.062 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:32.062 07:17:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.062 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.322 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:32.323 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:32.583 true 00:08:32.583 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:32.583 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.584 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.843 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:32.844 07:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:33.103 true 00:08:33.103 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:33.103 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.103 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.363 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:33.363 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:33.624 true 00:08:33.624 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592 00:08:33.624 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:33.884 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:33.884 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:08:33.884 07:18:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:08:33.884 Initializing NVMe Controllers
00:08:33.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:33.884 Controller IO queue size 128, less than required.
00:08:33.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:33.884 Controller IO queue size 128, less than required.
00:08:33.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:33.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:33.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:33.884 Initialization complete. Launching workers.
00:08:33.884 ========================================================
00:08:33.884                                                                     Latency(us)
00:08:33.884 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:33.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1883.13       0.92   33925.77    1304.69 1089295.06
00:08:33.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   15006.90       7.33    8529.89    1139.86  400514.98
00:08:33.884 ========================================================
00:08:33.884 Total                                                                  :   16890.03       8.25   11361.36    1139.86 1089295.06
00:08:34.143 true
00:08:34.143 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1244592
00:08:34.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1244592) - No such process
00:08:34.143 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1244592
00:08:34.144 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:34.405 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:34.405 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:34.405 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:34.405 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:34.405 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
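
For reference, the sh@44-sh@50 records above all come from the main stress loop of test/nvmf/target/ns_hotplug_stress.sh: as long as the background I/O process (pid 1244592) stays alive, the script swaps namespace 1 out of and back into the subsystem and grows the NULL1 bdev by 1 MB per pass; the bare "true" records are rpc.py printing the JSON-RPC result of each bdev_null_resize call, and the loop exits once kill -0 reports "No such process". A condensed sketch reconstructed from those line markers, not the verbatim script (rpc.py abbreviates the full scripts/rpc.py path, PERF_PID stands in for 1244592, and the starting size is inferred from the first resize to 1022 seen earlier):

    null_size=1021
    while kill -0 "$PERF_PID"; do                                        # sh@44: is the I/O generator still running?
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46
        null_size=$((null_size + 1))                                     # sh@49
        rpc.py bdev_null_resize NULL1 "$null_size"                       # sh@50: new size in MB; prints "true"
    done
    wait "$PERF_PID"                                                     # sh@53: reap the finished I/O process
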
00:08:34.405 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:34.666 null0
00:08:34.666 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:34.666 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:34.666 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:34.927 null1
00:08:34.927 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:34.927 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:34.927 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:34.927 null2
00:08:34.927 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:34.927 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:34.927 07:18:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:35.187 null3
00:08:35.187 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:35.187 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:35.187 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:35.187 null4
00:08:35.448 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:35.448 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:35.448 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:35.448 null5
00:08:35.448 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:35.448 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:35.448 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:35.710 null6
00:08:35.710 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:35.710 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:35.710 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:35.973 null7
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
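
The sh@58-sh@60 records above set up the fixture for the namespace-thrash test that follows: eight null bdevs of 100 MB with a 4096-byte block size, one backing device per worker, with the bdev name echoed back ("null0" ... "null7") as each RPC returns. As standalone commands this is simply (a sketch; rpc.py again abbreviates the full scripts/rpc.py path):

    nthreads=8                                     # sh@58
    pids=()                                        # sh@58
    for ((i = 0; i < nthreads; i++)); do           # sh@59
        # bdev_null_create <name> <size_mb> <block_size>
        rpc.py bdev_null_create "null$i" 100 4096  # sh@60
    done
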
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
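
From here on, the sh@14-sh@18 records come from eight concurrent copies of the add_remove helper, which is why their ordering interleaves: each worker pins one namespace ID to one null bdev and cycles it through ten add/remove rounds. Reconstructed from those line markers (a sketch, not the verbatim script; rpc.py abbreviates the full scripts/rpc.py path):

    add_remove() {
        local nsid=$1 bdev=$2                                                        # sh@14
        for ((i = 0; i < 10; i++)); do                                               # sh@16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }
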
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
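
The sh@62-sh@66 records show how those workers are launched: each add_remove call is backgrounded, its pid captured with $!, and the "wait 1251267 ... 1251280" record a few lines below is what blocks until all eight workers finish. Continuing the sketch above (nthreads and pids as set at sh@58):

    for ((i = 0; i < nthreads; i++)); do   # sh@62
        add_remove $((i + 1)) "null$i" &   # sh@63: nsid 1..8 onto null0..null7
        pids+=($!)                         # sh@64
    done
    wait "${pids[@]}"                      # sh@66
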
00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:35.973 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1251267 1251269 1251270 1251272 1251274 1251276 1251277 1251280 00:08:35.974 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:35.974 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:35.974 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:35.974 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.974 07:18:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.974 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.974 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.974 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.236 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.497 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.498 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.498 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.498 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.498 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.759 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.759 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.759 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.759 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.759 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.759 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.760 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.022 07:18:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.022 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.285 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.548 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.810 07:18:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.810 07:18:05 
[xtrace condensed, 00:08:37.810-00:08:39.648 (07:18:05-07:18:07): target/ns_hotplug_stress.sh lines 16-18 repeat interleaved iterations of (( ++i )); (( i < 10 )); rpc.py nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 null<nsid-1>; rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 <nsid> -- attaching and detaching namespaces 1 through 8 (backed by bdevs null0-null7) on cnode1 until each counter reaches 10]
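A minimal bash sketch of the loop shape that trace implies is below. Only script lines 16-18 are visible in the log, so the worker function, variable names, and the random namespace pick are illustrative assumptions, not code copied from the SPDK repository:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Each worker attaches and detaches namespaces 1-8 (backed by bdevs
    # null0-null7) until its counter reaches 10, matching the
    # (( ++i )) / (( i < 10 )) pairs in the trace above.
    stress_worker() {
        local i n
        for ((i = 0; i < 10; ++i)); do
            n=$((RANDOM % 8 + 1))                              # namespace IDs 1-8
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    }

Hammering add/remove on the same subsystem from several workers while a host is connected is what makes this a hotplug stress rather than a functional check.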
[xtrace condensed: the remaining (( ++i )) / (( i < 10 )) checks fire as each worker's counter reaches 10 and its loop exits]
00:08:39.648 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:08:39.648 07:18:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
[xtrace condensed: nvmftestfini/nvmfcleanup (nvmf/common.sh@516-129) syncs, then runs modprobe -v -r nvme-tcp and nvme-fabrics]
00:08:39.648 rmmod nvme_tcp
00:08:39.648 rmmod nvme_fabrics
00:08:39.910 rmmod nvme_keyring
[xtrace condensed: killprocess 1243967 (common/autotest_common.sh@954-972) confirms the pid still answers kill -0 and is reactor_1, not sudo, then reports:]
00:08:39.910 killing process with pid 1243967
[xtrace condensed: killprocess sends kill/wait to pid 1243967; nvmf_tcp_fini runs iptr (iptables-save | grep -v SPDK_NVMF | iptables-restore) and remove_spdk_ns]
00:08:42.458 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:42.458
00:08:42.458 real	0m48.857s
00:08:42.458 user	3m12.773s
00:08:42.458 sys	0m16.017s
00:08:42.458 ************************************
00:08:42.458 END TEST nvmf_ns_hotplug_stress
00:08:42.458 ************************************
00:08:42.458 07:18:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
[xtrace condensed: run_test argument check (common/autotest_common.sh@1105-1111) and xtrace toggling]
00:08:42.458 ************************************
00:08:42.458 START TEST nvmf_delete_subsystem
00:08:42.458 ************************************
00:08:42.458 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:42.458 * Looking for test storage...
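The banner pairs and the real/user/sys block around them come from autotest's run_test helper; its observable behavior reduces to roughly the sketch below (the real helper in common/autotest_common.sh also does the argument checks and xtrace bookkeeping visible in the trimmed entries):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"              # emits the real/user/sys block seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }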
00:08:42.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
[xtrace condensed: common/autotest_common.sh@1692-1707 reads the installed lcov version with awk, finds 1.15 (lt 1.15 2 via scripts/common.sh cmp_versions), and exports LCOV_OPTS and LCOV with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 plus the genhtml/geninfo coverage flags]
00:08:42.458 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
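The trimmed version check leans on the comparison helper in spdk/scripts/common.sh, whose visible moves are splitting each version on '.', '-' and ':' and comparing field by field. A self-contained sketch of that logic (function body reconstructed from the trace, not copied from the repository):

    # lt A B -> success when version A is strictly lower than version B,
    # matching the 'lt 1.15 2' call visible in the trace.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "lower than"
    }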
[env setup condensed: nvmf/common.sh@7-22, on Linux (uname -s), sets NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be (from nvme gen-hostnqn) with the matching NVME_HOSTID, NET_TYPE=phy and NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn; it then sources spdk/scripts/common.sh and /etc/opt/spdk-pkgdep/paths/export.sh]
[trace condensed: paths/export.sh lines 2-6 prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the standard system directories and export PATH; because the file is re-sourced by every nested test, the same three directories are stacked in PATH many times over]
[xtrace condensed: nvmf/common.sh@51-31 sets NVMF_APP_SHM_ID=0 and build_nvmf_app_args appends -i "$NVMF_APP_SHM_ID" -e 0xFFFF to NVMF_APP]
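The duplication is harmless here, but if it mattered, a guard in paths/export.sh could prepend each directory only once. The helper below is hypothetical, not part of the suite:

    prepend_path_once() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, do nothing
            *) PATH=$1:$PATH ;;
        esac
    }
    prepend_path_once /opt/go/1.21.1/bin
    prepend_path_once /opt/protoc/21.7/bin
    prepend_path_once /opt/golangci/1.54.2/bin
    export PATH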
00:08:42.459 07:18:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:42.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
[xtrace condensed: build_nvmf_app_args finishes and have_pci_nics=0; delete_subsystem.sh@12 then calls nvmftestinit, which traps nvmftestfini, runs prepare_net_devs (is_hw=no initially), tears down any stale spdk netns and, with NET_TYPE=phy, enters gather_supported_nvmf_pci_devs at 00:08:50.604 (07:18:17)]
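The `[: : integer expression expected` complaint above means an empty expansion reached a numeric test at nvmf/common.sh line 33. The general fix pattern is to default the expansion before testing; the variable name below is illustrative, since the actual one at line 33 is not visible in this log:

    # If FOO is unset, '[' "$FOO" -eq 1 ']' expands to '[' '' -eq 1 ']' and errors.
    # Defaulting the expansion keeps the numeric test well-formed:
    if [ "${FOO:-0}" -eq 1 ]; then
        echo "FOO is enabled"
    fi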
[xtrace condensed: nvmf/common.sh@313-344 seeds the e810, x722 and mlx arrays with the known Intel (0x1592, 0x159b, 0x37d2) and Mellanox (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013) device IDs, then keeps only the e810 list ([[ e810 == e810 ]]) since the transport is tcp, not rdma]
00:08:50.605 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:08:50.605 Found 0000:4b:00.1 (0x8086 - 0x159b)
[xtrace condensed: both ports probe as the ice driver, neither unknown nor unbound; nvmf/common.sh@410-429 then maps each PCI function to its kernel net device]
00:08:50.605 Found net devices under 0000:4b:00.0: cvl_0_0
00:08:50.605 Found net devices under 0000:4b:00.1: cvl_0_1
[xtrace condensed: both netdevs are recorded and is_hw=yes; nvmf_tcp_init (nvmf/common.sh@250-287) designates cvl_0_0 as the target interface and cvl_0_1 as the initiator interface, flushes both addresses, creates namespace cvl_0_0_ns_spdk, moves cvl_0_0 into it, assigns 10.0.0.1/24 to cvl_0_1 and 10.0.0.2/24 to cvl_0_0, brings cvl_0_1, cvl_0_0 and the namespace loopback up, then opens TCP port 4420 with ipts]
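Spelled out, the namespace plumbing the condensed trace performs is the sequence below; every command is taken from the nvmf/common.sh entries in this log (run as root). Moving the target-side port into its own namespace forces initiator-to-target traffic across the physical link even though both ports sit in the same host:

    ip netns add cvl_0_0_ns_spdk                                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # move target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # allow NVMe/TCP in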
00:08:50.605 07:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:50.605 07:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:50.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:50.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms
00:08:50.605
00:08:50.605 --- 10.0.0.2 ping statistics ---
00:08:50.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:50.605 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms
00:08:50.605 07:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:50.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:50.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms
00:08:50.605
00:08:50.605 --- 10.0.0.1 ping statistics ---
00:08:50.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:50.605 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms
[xtrace condensed: NVMF_APP is prefixed with the netns exec command, nvmftestinit returns 0, NVMF_TRANSPORT_OPTS becomes '-t tcp -o', and nvme-tcp is modprobed on the initiator side]
00:08:50.605 07:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:08:50.605 07:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1256910
00:08:50.605 07:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1256910
00:08:50.605 07:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
[xtrace condensed: waitforlisten (common/autotest_common.sh@835-840) sets rpc_addr=/var/tmp/spdk.sock and max_retries=100]
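The target is launched inside that namespace and the script blocks until the RPC socket answers. A simplified stand-in for that launch-and-wait is below; waitforlisten's actual polling differs, and rpc_get_methods is just one cheap RPC to probe with:

    spdk_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # -i 0: shm id, -e 0xFFFF: tracepoint group mask, -m 0x3: run on cores 0-1
    ip netns exec cvl_0_0_ns_spdk "$spdk_bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Probe the RPC socket until the target is up and answering.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done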
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.605 07:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.605 07:18:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.605 [2024-11-26 07:18:17.929718] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:08:50.605 [2024-11-26 07:18:17.929785] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.605 [2024-11-26 07:18:18.030473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:50.605 [2024-11-26 07:18:18.081769] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.605 [2024-11-26 07:18:18.081819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.605 [2024-11-26 07:18:18.081828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.605 [2024-11-26 07:18:18.081835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.606 [2024-11-26 07:18:18.081841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.606 [2024-11-26 07:18:18.083505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.606 [2024-11-26 07:18:18.083509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.867 [2024-11-26 07:18:18.793985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:50.867 07:18:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.867 [2024-11-26 07:18:18.818288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.867 NULL1 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.867 Delay0 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1257115 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:50.867 07:18:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:50.867 [2024-11-26 07:18:18.945324] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
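Everything in this provisioning block is driven through rpc_cmd, which in SPDK's autotest harness forwards to scripts/rpc.py on the target's default /var/tmp/spdk.sock (the wrapper is an assumption here; only the RPC names and arguments are visible in the trace). A minimal standalone sketch of the same sequence, with every value copied from the log:

  rpc=scripts/rpc.py   # talks to the nvmf_tgt started inside cvl_0_0_ns_spdk
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512    # 1000 MB bdev, 512 B blocks, no real storage behind it
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev wraps NULL1 with roughly one second of injected latency per I/O (the four values are the average and 99th-percentile read/write latencies in microseconds), so the queue-depth-128 perf job launched above is guaranteed to still have commands in flight when the subsystem is deleted after the two-second sleep.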
00:08:52.781 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.781 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.781 07:18:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 starting I/O failed: -6 00:08:53.352 Write completed with error (sct=0, sc=8) 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 Write completed with error (sct=0, sc=8) 00:08:53.352 starting I/O failed: -6 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 Write completed with error (sct=0, sc=8) 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 starting I/O failed: -6 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.352 starting I/O failed: -6 00:08:53.352 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 starting I/O failed: -6 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error 
(sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 
00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 [2024-11-26 07:18:21.192948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d62c0 is same with the state(6) to be set 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.353 Write completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 Read completed with error (sct=0, sc=8) 00:08:53.353 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 
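Three failure signatures are interleaved in this flood, and all of them are the expected outcome of deleting a subsystem under load rather than a malfunction. "Read/Write completed with error (sct=0, sc=8)" is spdk_nvme_perf printing each aborted completion: status code type 0 is the NVMe generic command status, and status code 0x08 is Command Aborted due to SQ Deletion, which is what in-flight commands get when nvmf_delete_subsystem tears the queues down. "starting I/O failed: -6" is the submission side of the same collapse, consistent with -ENXIO (errno 6) once a qpair is no longer usable. The nvme_tcp.c "recv state of tqpair=0x... is same with the state(6) to be set" ERROR lines are initiator-side teardown noise; the tqpair values are in-process object addresses, useful only for telling qpairs apart. A quick triage of a captured run, assuming the console output was saved to perf.log:

  grep -c 'completed with error' perf.log            # completions aborted by the deletion
  grep -c 'starting I/O failed' perf.log             # submissions refused after qpair failure
  grep -o 'tqpair=0x[0-9a-f]*' perf.log | sort -u    # distinct qpairs involved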
00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 Write completed with error (sct=0, sc=8) 00:08:53.354 Read completed with error (sct=0, sc=8) 00:08:53.354 starting I/O failed: -6 00:08:53.354 [2024-11-26 07:18:21.197832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7f70c4000c40 is same with the state(6) to be set 00:08:53.354 starting I/O failed: -6 00:08:53.354 starting I/O failed: -6 00:08:53.354 starting I/O failed: -6 00:08:53.354 starting I/O failed: -6 00:08:53.354 starting I/O failed: -6 00:08:53.354 starting I/O failed: -6 00:08:54.296 [2024-11-26 07:18:22.166653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d79a0 is same with the state(6) to be set 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 [2024-11-26 07:18:22.197133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d64a0 is same with the state(6) to be set 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, 
sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 [2024-11-26 07:18:22.198045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d6860 is same with the state(6) to be set 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read 
completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 [2024-11-26 07:18:22.199692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f70c400d7c0 is same with the state(6) to be set 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Write completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.296 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Write completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Write completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Write completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Write completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Write completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 Read completed with error (sct=0, sc=8) 00:08:54.297 [2024-11-26 07:18:22.199817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f70c400d020 is same with the state(6) to be set 00:08:54.297 Initializing NVMe Controllers 00:08:54.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:54.297 Controller IO queue size 128, less than required. 00:08:54.297 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:54.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:54.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:54.297 Initialization complete. Launching workers. 
00:08:54.297 ======================================================== 00:08:54.297 Latency(us) 00:08:54.297 Device Information : IOPS MiB/s Average min max 00:08:54.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.62 0.09 896111.66 468.15 1008535.91 00:08:54.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.68 0.09 937190.19 441.33 2001299.42 00:08:54.297 ======================================================== 00:08:54.297 Total : 366.31 0.18 915925.35 441.33 2001299.42 00:08:54.297 00:08:54.297 [2024-11-26 07:18:22.200339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d79a0 (9): Bad file descriptor 00:08:54.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:54.297 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.297 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:54.297 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1257115 00:08:54.297 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1257115 00:08:54.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1257115) - No such process 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1257115 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1257115 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1257115 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.868 07:18:22 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.868 [2024-11-26 07:18:22.729617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1257948 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257948 00:08:54.868 07:18:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.868 [2024-11-26 07:18:22.827485] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
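The @56-@60 statements here (like the @34-@38 ones after the first perf run) come from the script's bounded wait loop: having deleted the subsystem out from under perf, delete_subsystem.sh polls every half second until the perf process disappears. A plausible reconstruction from the traced line numbers (only the probe, sleep, and counter statements are visible; the loop structure and failure path are inferred):

  perf_pid=1257948                             # PID captured when spdk_nvme_perf was backgrounded
  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do   # signal 0 only probes that the PID still exists
      sleep 0.5
      ((delay++ > 20)) && exit 1               # ~10 s budget before declaring the run hung
  done

Once the PID is gone, kill -0 itself prints "kill: (pid) - No such process" and the loop ends; the harness then runs NOT wait $perf_pid, asserting that perf exited non-zero (the es=1 bookkeeping above), i.e. that its I/O really did fail when the subsystem vanished.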
00:08:55.476 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:55.476 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257948
00:08:55.476 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:55.756 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:55.756 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257948
00:08:55.756 07:18:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:56.344 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:56.344 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257948
00:08:56.344 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:56.913 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:56.913 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257948
00:08:56.913 07:18:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:57.483 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:57.483 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257948
00:08:57.483 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:57.744 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:57.744 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257948
00:08:57.744 07:18:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:58.005 Initializing NVMe Controllers
00:08:58.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:58.005 Controller IO queue size 128, less than required.
00:08:58.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:58.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:58.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:58.005 Initialization complete. Launching workers.
00:08:58.005 ======================================================== 00:08:58.005 Latency(us) 00:08:58.005 Device Information : IOPS MiB/s Average min max 00:08:58.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002624.48 1000189.13 1042059.69 00:08:58.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002956.18 1000300.74 1041262.99 00:08:58.005 ======================================================== 00:08:58.005 Total : 256.00 0.12 1002790.33 1000189.13 1042059.69 00:08:58.005 00:08:58.266 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.266 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1257948 00:08:58.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1257948) - No such process 00:08:58.266 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1257948 00:08:58.266 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:58.266 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.267 rmmod nvme_tcp 00:08:58.267 rmmod nvme_fabrics 00:08:58.267 rmmod nvme_keyring 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1256910 ']' 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1256910 00:08:58.267 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1256910 ']' 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1256910 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1256910 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1256910' 00:08:58.527 killing process with pid 1256910 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1256910 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1256910 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.527 07:18:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.075 00:09:01.075 real 0m18.499s 00:09:01.075 user 0m31.233s 00:09:01.075 sys 0m6.890s 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.075 ************************************ 00:09:01.075 END TEST nvmf_delete_subsystem 00:09:01.075 ************************************ 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.075 ************************************ 00:09:01.075 START TEST nvmf_host_management 00:09:01.075 ************************************ 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:01.075 * Looking for test storage... 
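With the test body finished, nvmftestfini above unwinds the fixture in reverse: modprobe -v -r unloads nvme-tcp and its dependencies (the three rmmod lines), killprocess stops and reaps the nvmf_tgt reactor, iptr strips the firewall rule that ipts added at setup, and the namespace plumbing is flushed (the netns removal itself runs with xtrace muted through the "15> /dev/null" redirect, so only its wrapper appears). Condensed into a sketch with names from the log; the exact pipeline inside iptr and the body of _remove_spdk_ns are assumptions, since the trace only shows their component commands:

  modprobe -v -r nvme-tcp                                 # also pulls out nvme_fabrics and nvme_keyring
  kill "$nvmfpid" && wait "$nvmfpid"                      # nvmfpid=1256910 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                # clear the initiator-side address

The 'SPDK_NVMF:' comment attached to the INPUT rule at setup is what makes this teardown selective: restoring the saved ruleset minus every tagged line removes SPDK's rules without touching the rest of the host firewall. The time summary above (real 0m18.499s) covers the entire nvmf_delete_subsystem test.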
00:09:01.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.075 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.076 --rc genhtml_branch_coverage=1 00:09:01.076 --rc genhtml_function_coverage=1 00:09:01.076 --rc genhtml_legend=1 00:09:01.076 --rc geninfo_all_blocks=1 00:09:01.076 --rc geninfo_unexecuted_blocks=1 00:09:01.076 00:09:01.076 ' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.076 --rc genhtml_branch_coverage=1 00:09:01.076 --rc genhtml_function_coverage=1 00:09:01.076 --rc genhtml_legend=1 00:09:01.076 --rc geninfo_all_blocks=1 00:09:01.076 --rc geninfo_unexecuted_blocks=1 00:09:01.076 00:09:01.076 ' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:01.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.076 --rc genhtml_branch_coverage=1 00:09:01.076 --rc genhtml_function_coverage=1 00:09:01.076 --rc genhtml_legend=1 00:09:01.076 --rc geninfo_all_blocks=1 00:09:01.076 --rc geninfo_unexecuted_blocks=1 00:09:01.076 00:09:01.076 ' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.076 --rc genhtml_branch_coverage=1 00:09:01.076 --rc genhtml_function_coverage=1 00:09:01.076 --rc genhtml_legend=1 00:09:01.076 --rc geninfo_all_blocks=1 00:09:01.076 --rc geninfo_unexecuted_blocks=1 00:09:01.076 00:09:01.076 ' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:09:01.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.076 07:18:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.227 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:09.228 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:09.228 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:09.228 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.228 07:18:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:09.228 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.228 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:09.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:09:09.229 00:09:09.229 --- 10.0.0.2 ping statistics --- 00:09:09.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.229 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:09:09.229 00:09:09.229 --- 10.0.0.1 ping statistics --- 00:09:09.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.229 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1262942 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1262942 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:09.229 07:18:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1262942 ']' 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.229 07:18:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.229 [2024-11-26 07:18:36.540095] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:09:09.229 [2024-11-26 07:18:36.540179] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.229 [2024-11-26 07:18:36.641472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.229 [2024-11-26 07:18:36.694380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.229 [2024-11-26 07:18:36.694424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.229 [2024-11-26 07:18:36.694433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.229 [2024-11-26 07:18:36.694440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.229 [2024-11-26 07:18:36.694447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:09.229 [2024-11-26 07:18:36.696749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.229 [2024-11-26 07:18:36.696914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.229 [2024-11-26 07:18:36.697054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.229 [2024-11-26 07:18:36.697054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.491 [2024-11-26 07:18:37.420983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:09.491 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.492 Malloc0 00:09:09.492 [2024-11-26 07:18:37.499847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1263038 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1263038 /var/tmp/bdevperf.sock 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1263038 ']' 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:09.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.492 { 00:09:09.492 "params": { 00:09:09.492 "name": "Nvme$subsystem", 00:09:09.492 "trtype": "$TEST_TRANSPORT", 00:09:09.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.492 "adrfam": "ipv4", 00:09:09.492 "trsvcid": "$NVMF_PORT", 00:09:09.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.492 "hdgst": ${hdgst:-false}, 00:09:09.492 "ddgst": ${ddgst:-false} 00:09:09.492 }, 00:09:09.492 "method": "bdev_nvme_attach_controller" 00:09:09.492 } 00:09:09.492 EOF 00:09:09.492 )") 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:09.492 07:18:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.492 "params": { 00:09:09.492 "name": "Nvme0", 00:09:09.492 "trtype": "tcp", 00:09:09.492 "traddr": "10.0.0.2", 00:09:09.492 "adrfam": "ipv4", 00:09:09.492 "trsvcid": "4420", 00:09:09.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:09.492 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:09.492 "hdgst": false, 00:09:09.492 "ddgst": false 00:09:09.492 }, 00:09:09.492 "method": "bdev_nvme_attach_controller" 00:09:09.492 }' 00:09:09.754 [2024-11-26 07:18:37.610428] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:09:09.754 [2024-11-26 07:18:37.610498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263038 ] 00:09:09.754 [2024-11-26 07:18:37.705939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.754 [2024-11-26 07:18:37.760186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.015 Running I/O for 10 seconds... 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:10.589 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.590 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.590 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.590 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:10.590 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:10.590 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:10.590 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:10.590 07:18:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:10.590 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:10.590 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.590 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.590 [2024-11-26 07:18:38.508078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041130 is same with the state(6) to be set 00:09:10.590 [... the same tcp.c:1773 recv-state error repeated verbatim roughly 30 more times ...] 00:09:10.590 [2024-11-26 07:18:38.508651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.590 [2024-11-26 07:18:38.508710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:10.590 [... the same WRITE command / ABORTED - SQ DELETION completion pair repeated verbatim for cid:1 through cid:62, lba:82048 through lba:89856; every completion reports qid:1 cid:0 ...] [2024-11-26 07:18:38.509843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:10.592 [2024-11-26 07:18:38.509851] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:10.592 [2024-11-26 07:18:38.509886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:09:10.592 [2024-11-26 07:18:38.511146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:10.592 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.592 task offset: 81920 on job bdev=Nvme0n1 fails 00:09:10.592 00:09:10.592 Latency(us) 00:09:10.592 [2024-11-26T06:18:38.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.592 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:10.592 Job: Nvme0n1 ended in about 0.41 seconds with error 00:09:10.592 Verification LBA range: start 0x0 length 0x400 00:09:10.592 Nvme0n1 : 0.41 1546.32 96.64 154.63 0.00 36435.42 3386.03 34297.17 00:09:10.592 [2024-11-26T06:18:38.690Z] =================================================================================================================== 00:09:10.592 [2024-11-26T06:18:38.690Z] Total : 1546.32 96.64 154.63 0.00 36435.42 3386.03 34297.17 00:09:10.592 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:10.592 [2024-11-26 07:18:38.513398] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:10.592 [2024-11-26 07:18:38.513439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc16000 (9): Bad file descriptor 00:09:10.592 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.592 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.593 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.593 07:18:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:10.593 [2024-11-26 07:18:38.575344] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
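The failover above works because host_management.sh toggles the subsystem's host allow list: dropping the host makes the target abort every queued WRITE and tear down the queue pairs, and re-adding it lets the driver's automatic controller reset reconnect. A minimal sketch of that round trip with the same rpc.py calls; the remove step is inferred from the flow, since only the add appears in this excerpt:

# Sketch (assumes a running target): deny, then re-allow, a host NQN.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode0
host=nqn.2016-06.io.spdk:host0
$rpc nvmf_subsystem_remove_host $nqn $host   # target aborts the host's I/O, drops its qpairs
sleep 1                                      # let the initiator observe the disconnect
$rpc nvmf_subsystem_add_host $nqn $host      # the automatic controller reset can now reconnect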
00:09:11.538 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1263038 00:09:11.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1263038) - No such process 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:11.539 { 00:09:11.539 "params": { 00:09:11.539 "name": "Nvme$subsystem", 00:09:11.539 "trtype": "$TEST_TRANSPORT", 00:09:11.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.539 "adrfam": "ipv4", 00:09:11.539 "trsvcid": "$NVMF_PORT", 00:09:11.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.539 "hdgst": ${hdgst:-false}, 00:09:11.539 "ddgst": ${ddgst:-false} 00:09:11.539 }, 00:09:11.539 "method": "bdev_nvme_attach_controller" 00:09:11.539 } 00:09:11.539 EOF 00:09:11.539 )") 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:11.539 07:18:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:11.539 "params": { 00:09:11.539 "name": "Nvme0", 00:09:11.539 "trtype": "tcp", 00:09:11.539 "traddr": "10.0.0.2", 00:09:11.539 "adrfam": "ipv4", 00:09:11.539 "trsvcid": "4420", 00:09:11.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:11.539 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:11.539 "hdgst": false, 00:09:11.539 "ddgst": false 00:09:11.539 }, 00:09:11.539 "method": "bdev_nvme_attach_controller" 00:09:11.539 }' 00:09:11.539 [2024-11-26 07:18:39.583144] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:09:11.539 [2024-11-26 07:18:39.583206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263473 ] 00:09:11.800 [2024-11-26 07:18:39.671418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.800 [2024-11-26 07:18:39.706647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.060 Running I/O for 1 seconds... 
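The JSON printed above is what bdevperf reads from /dev/fd/62. A standalone equivalent is sketched below; the "subsystems" wrapper is the layout gen_nvmf_target_json is assumed to emit around the fragment shown, and the file name is illustrative:

# Sketch: the same bdevperf run, config in a file instead of /dev/fd/62.
cat > /tmp/nvme0.json << 'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "params": {
    "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false, "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}]}]}
EOF
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1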
00:09:13.003 1536.00 IOPS, 96.00 MiB/s 00:09:13.003 Latency(us) 00:09:13.003 [2024-11-26T06:18:41.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.003 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:13.003 Verification LBA range: start 0x0 length 0x400 00:09:13.004 Nvme0n1 : 1.01 1579.77 98.74 0.00 0.00 39778.86 6635.52 33204.91 00:09:13.004 [2024-11-26T06:18:41.102Z] =================================================================================================================== 00:09:13.004 [2024-11-26T06:18:41.102Z] Total : 1579.77 98.74 0.00 0.00 39778.86 6635.52 33204.91 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.265 rmmod nvme_tcp 00:09:13.265 rmmod nvme_fabrics 00:09:13.265 rmmod nvme_keyring 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1262942 ']' 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1262942 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1262942 ']' 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1262942 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1262942 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:13.265 07:18:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1262942' 00:09:13.265 killing process with pid 1262942 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1262942 00:09:13.265 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1262942 00:09:13.526 [2024-11-26 07:18:41.388281] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.526 07:18:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.440 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.440 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:15.440 00:09:15.440 real 0m14.798s 00:09:15.440 user 0m23.711s 00:09:15.440 sys 0m6.801s 00:09:15.440 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.440 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:15.440 ************************************ 00:09:15.440 END TEST nvmf_host_management 00:09:15.440 ************************************ 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.701 ************************************ 00:09:15.701 START TEST nvmf_lvol 00:09:15.701 ************************************ 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:15.701 * Looking for test storage... 00:09:15.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.701 --rc genhtml_branch_coverage=1 00:09:15.701 --rc genhtml_function_coverage=1 00:09:15.701 --rc genhtml_legend=1 00:09:15.701 --rc geninfo_all_blocks=1 00:09:15.701 --rc geninfo_unexecuted_blocks=1 00:09:15.701 00:09:15.701 ' 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.701 --rc genhtml_branch_coverage=1 00:09:15.701 --rc genhtml_function_coverage=1 00:09:15.701 --rc genhtml_legend=1 00:09:15.701 --rc geninfo_all_blocks=1 00:09:15.701 --rc geninfo_unexecuted_blocks=1 00:09:15.701 00:09:15.701 ' 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:15.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.701 --rc genhtml_branch_coverage=1 00:09:15.701 --rc genhtml_function_coverage=1 00:09:15.701 --rc genhtml_legend=1 00:09:15.701 --rc geninfo_all_blocks=1 00:09:15.701 --rc geninfo_unexecuted_blocks=1 00:09:15.701 00:09:15.701 ' 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.701 --rc genhtml_branch_coverage=1 00:09:15.701 --rc genhtml_function_coverage=1 00:09:15.701 --rc genhtml_legend=1 00:09:15.701 --rc geninfo_all_blocks=1 00:09:15.701 --rc geninfo_unexecuted_blocks=1 00:09:15.701 00:09:15.701 ' 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
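The scripts/common.sh version-comparison trace a few lines above is how run_test decides whether the installed lcov predates 2.x: both version strings are split on '.' and '-' (IFS=.-) and the fields are compared numerically, left to right. A condensed sketch of the same technique; ver_lt is an illustrative name for what the harness's lt/cmp_versions helpers do:

# Sketch: field-wise numeric version comparison, as traced above.
ver_lt() {
    local IFS=.- i a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0   # first smaller field decides
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov is older than 2"   # prints: lcov is older than 2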
00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.701 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.962 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:15.962 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:15.962 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.962 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=[same value as the paths/export.sh@2 line above: the /opt/golangci, /opt/protoc and /opt/go toolchain dirs prepended again to the system PATH] 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=[same value] 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo [same value] 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:15.963 07:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:24.104 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:24.104 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.104 07:18:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.104 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:24.105 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:24.105 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.105 07:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:09:24.105 00:09:24.105 --- 10.0.0.2 ping statistics --- 00:09:24.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.105 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:09:24.105 00:09:24.105 --- 10.0.0.1 ping statistics --- 00:09:24.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.105 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1268071 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1268071 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1268071 ']' 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.105 07:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:24.105 [2024-11-26 07:18:51.357560] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
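The addresses just pinged come from the namespace topology that nvmf_tcp_init built a few lines earlier: one port of the e810 pair (cvl_0_0) is moved into a private network namespace for the target, the other (cvl_0_1) stays in the root namespace for the initiator, and a single firewall rule admits the NVMe/TCP port. Condensed from the trace above:

# Sketch of the target/initiator split performed by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
ping -c 1 10.0.0.2                                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator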
00:09:24.105 [2024-11-26 07:18:51.357609] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.105 [2024-11-26 07:18:51.453928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.105 [2024-11-26 07:18:51.489409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.105 [2024-11-26 07:18:51.489444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.105 [2024-11-26 07:18:51.489452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.105 [2024-11-26 07:18:51.489458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.106 [2024-11-26 07:18:51.489464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.106 [2024-11-26 07:18:51.491024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.106 [2024-11-26 07:18:51.491194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.106 [2024-11-26 07:18:51.491212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.106 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.106 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:24.106 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.106 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.106 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:24.366 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.366 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:24.366 [2024-11-26 07:18:52.379540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.366 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.625 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:24.625 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.887 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:24.887 07:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:25.148 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:25.409 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5a75c9ce-c823-4fb3-bdaf-bcf64035dcb9 00:09:25.409 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5a75c9ce-c823-4fb3-bdaf-bcf64035dcb9 lvol 20 00:09:25.409 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=23e9f7f4-be0f-48e7-901d-651533cbc7a8 00:09:25.409 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:25.670 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 23e9f7f4-be0f-48e7-901d-651533cbc7a8 00:09:25.931 07:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:25.931 [2024-11-26 07:18:53.996239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.191 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.191 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1268771 00:09:26.191 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:26.191 07:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:27.131 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 23e9f7f4-be0f-48e7-901d-651533cbc7a8 MY_SNAPSHOT 00:09:27.390 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=026f91d1-7029-4f47-9ec7-e3495ee0dfdb 00:09:27.390 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 23e9f7f4-be0f-48e7-901d-651533cbc7a8 30 00:09:27.650 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 026f91d1-7029-4f47-9ec7-e3495ee0dfdb MY_CLONE 00:09:27.934 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=aaf0a236-4399-4691-a112-6d4f2cde26da 00:09:27.934 07:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate aaf0a236-4399-4691-a112-6d4f2cde26da 00:09:28.193 07:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1268771 00:09:38.182 Initializing NVMe Controllers 00:09:38.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:38.182 Controller IO queue size 128, less than required. 00:09:38.182 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
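The nvmf_lvol run whose I/O is starting above was assembled in the preceding trace: two 64 MiB malloc bdevs striped into a RAID-0, an lvolstore and a 20 MiB lvol carved from it and exported over TCP, then snapshot, resize-to-30, clone, and inflate exercised while bdevperf writes. The same sequence condensed into one sketch; the UUID variables stand in for the values rpc.py returns:

# Sketch of the lvol workflow driven by nvmf_lvol.sh above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                        # -> Malloc0
$rpc bdev_malloc_create 64 512                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvolstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze current contents
$rpc bdev_lvol_resize "$lvol" 30                      # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                       # allocate clusters, detach from snapshot

Teardown reverses this: nvmf_delete_subsystem, bdev_lvol_delete, then bdev_lvol_delete_lvstore, as the trace that follows shows.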
00:09:38.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:38.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:38.182 Initialization complete. Launching workers. 00:09:38.182 ======================================================== 00:09:38.182 Latency(us) 00:09:38.182 Device Information : IOPS MiB/s Average min max 00:09:38.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16347.00 63.86 7830.72 1605.73 45084.29 00:09:38.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17456.60 68.19 7333.74 342.90 43204.85 00:09:38.182 ======================================================== 00:09:38.182 Total : 33803.60 132.05 7574.07 342.90 45084.29 00:09:38.182 00:09:38.182 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:38.182 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 23e9f7f4-be0f-48e7-901d-651533cbc7a8 00:09:38.182 07:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a75c9ce-c823-4fb3-bdaf-bcf64035dcb9 00:09:38.182 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:38.182 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:38.182 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:38.182 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.183 rmmod nvme_tcp 00:09:38.183 rmmod nvme_fabrics 00:09:38.183 rmmod nvme_keyring 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1268071 ']' 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1268071 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1268071 ']' 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1268071 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1268071 00:09:38.183 07:19:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1268071' 00:09:38.183 killing process with pid 1268071 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1268071 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1268071 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.183 07:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.689 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:39.689 00:09:39.689 real 0m23.880s 00:09:39.689 user 1m4.868s 00:09:39.689 sys 0m8.510s 00:09:39.689 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.689 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:39.689 ************************************ 00:09:39.689 END TEST nvmf_lvol 00:09:39.689 ************************************ 00:09:39.689 07:19:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:39.689 07:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.689 07:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.689 07:19:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.689 ************************************ 00:09:39.689 START TEST nvmf_lvs_grow 00:09:39.689 ************************************ 00:09:39.689 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:39.689 * Looking for test storage... 
00:09:39.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.690 --rc genhtml_branch_coverage=1 00:09:39.690 --rc genhtml_function_coverage=1 00:09:39.690 --rc genhtml_legend=1 00:09:39.690 --rc geninfo_all_blocks=1 00:09:39.690 --rc geninfo_unexecuted_blocks=1 00:09:39.690 00:09:39.690 ' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.690 --rc genhtml_branch_coverage=1 00:09:39.690 --rc genhtml_function_coverage=1 00:09:39.690 --rc genhtml_legend=1 00:09:39.690 --rc geninfo_all_blocks=1 00:09:39.690 --rc geninfo_unexecuted_blocks=1 00:09:39.690 00:09:39.690 ' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.690 --rc genhtml_branch_coverage=1 00:09:39.690 --rc genhtml_function_coverage=1 00:09:39.690 --rc genhtml_legend=1 00:09:39.690 --rc geninfo_all_blocks=1 00:09:39.690 --rc geninfo_unexecuted_blocks=1 00:09:39.690 00:09:39.690 ' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.690 --rc genhtml_branch_coverage=1 00:09:39.690 --rc genhtml_function_coverage=1 00:09:39.690 --rc genhtml_legend=1 00:09:39.690 --rc geninfo_all_blocks=1 00:09:39.690 --rc geninfo_unexecuted_blocks=1 00:09:39.690 00:09:39.690 ' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:39.690 07:19:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.690 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.691 07:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:47.829 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:47.830 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:47.830 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.830 07:19:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:47.830 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:47.830 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.830 07:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:47.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:09:47.830 00:09:47.830 --- 10.0.0.2 ping statistics --- 00:09:47.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.830 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:09:47.830 00:09:47.830 --- 10.0.0.1 ping statistics --- 00:09:47.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.830 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:47.830 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1275146 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1275146 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1275146 ']' 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.831 07:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:47.831 [2024-11-26 07:19:15.293286] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
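To keep target and initiator on one host, the nvmf_tcp_init sequence above splits the two E810 ports across network namespaces: cvl_0_0 (10.0.0.2) serves the target inside cvl_0_0_ns_spdk, cvl_0_1 (10.0.0.1) stays in the default namespace as the initiator, and the two pings verify reachability in both directions. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt app is then launched with ip netns exec cvl_0_0_ns_spdk so its TCP listener binds inside the target namespace.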
00:09:47.831 [2024-11-26 07:19:15.293350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.831 [2024-11-26 07:19:15.394717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.831 [2024-11-26 07:19:15.445358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.831 [2024-11-26 07:19:15.445417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.831 [2024-11-26 07:19:15.445426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.831 [2024-11-26 07:19:15.445434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.831 [2024-11-26 07:19:15.445440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.831 [2024-11-26 07:19:15.446234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.093 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.093 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:48.093 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:48.093 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:48.093 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:48.093 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.093 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:48.354 [2024-11-26 07:19:16.316376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:48.354 ************************************ 00:09:48.354 START TEST lvs_grow_clean 00:09:48.354 ************************************ 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:48.354 07:19:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:48.354 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:48.615 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:48.615 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:48.876 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:09:48.876 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:09:48.876 07:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:49.136 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:49.136 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:49.136 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 lvol 150 00:09:49.136 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=43bfd6b2-caf6-4ea1-8230-598ce5dc3efe 00:09:49.136 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:49.136 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:49.397 [2024-11-26 07:19:17.353717] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:49.397 [2024-11-26 07:19:17.353793] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:49.397 true 00:09:49.397 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:09:49.397 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:49.658 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:49.658 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:49.658 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 43bfd6b2-caf6-4ea1-8230-598ce5dc3efe 00:09:49.919 07:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:50.180 [2024-11-26 07:19:18.076044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.180 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:50.441 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1275854 00:09:50.441 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:50.441 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:50.441 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1275854 /var/tmp/bdevperf.sock 00:09:50.441 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1275854 ']' 00:09:50.441 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:50.441 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.441 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:50.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:50.441 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.441 07:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:50.441 [2024-11-26 07:19:18.355687] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
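With networking up and nvmf_tgt listening, the lvs_grow_clean body above reduces to a short rpc.py recipe against the 200 MiB AIO file. The cluster counts the test asserts follow from the 4 MiB cluster size: 200 MiB gives 50 clusters, one claimed by metadata at this md-pages ratio, hence total_data_clusters=49; the 150 MiB lvol pins ceil(150/4) = 38; and once the file is grown to 400 MiB and rescanned, the bdev_lvol_grow_lvstore call made later in the run reports 99 data clusters with 99 - 38 = 61 free, exactly the values checked below. A condensed sketch, with $rpc and aio_file standing in for the full scripts/rpc.py and test/nvmf/target/aio_bdev paths used above:

    truncate -s 200M aio_file                             # backing file for the AIO bdev
    $rpc bdev_aio_create aio_file aio_bdev 4096           # 4 KiB block size
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 49
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)      # 150 MiB = 38 clusters
    truncate -s 400M aio_file && $rpc bdev_aio_rescan aio_bdev   # 51200 -> 102400 blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                 # -> 99 total, 61 free

The bdevperf process starting here then attaches to that subsystem over 10.0.0.2:4420 and drives randwrite I/O while the grow is performed.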
00:09:50.441 [2024-11-26 07:19:18.355758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275854 ] 00:09:50.441 [2024-11-26 07:19:18.447844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.441 [2024-11-26 07:19:18.500065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.384 07:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.384 07:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:51.384 07:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:51.384 Nvme0n1 00:09:51.644 07:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:51.644 [ 00:09:51.644 { 00:09:51.644 "name": "Nvme0n1", 00:09:51.644 "aliases": [ 00:09:51.644 "43bfd6b2-caf6-4ea1-8230-598ce5dc3efe" 00:09:51.644 ], 00:09:51.644 "product_name": "NVMe disk", 00:09:51.644 "block_size": 4096, 00:09:51.644 "num_blocks": 38912, 00:09:51.644 "uuid": "43bfd6b2-caf6-4ea1-8230-598ce5dc3efe", 00:09:51.644 "numa_id": 0, 00:09:51.644 "assigned_rate_limits": { 00:09:51.644 "rw_ios_per_sec": 0, 00:09:51.644 "rw_mbytes_per_sec": 0, 00:09:51.644 "r_mbytes_per_sec": 0, 00:09:51.644 "w_mbytes_per_sec": 0 00:09:51.644 }, 00:09:51.644 "claimed": false, 00:09:51.644 "zoned": false, 00:09:51.644 "supported_io_types": { 00:09:51.644 "read": true, 00:09:51.644 "write": true, 00:09:51.644 "unmap": true, 00:09:51.644 "flush": true, 00:09:51.644 "reset": true, 00:09:51.644 "nvme_admin": true, 00:09:51.644 "nvme_io": true, 00:09:51.644 "nvme_io_md": false, 00:09:51.644 "write_zeroes": true, 00:09:51.644 "zcopy": false, 00:09:51.644 "get_zone_info": false, 00:09:51.644 "zone_management": false, 00:09:51.644 "zone_append": false, 00:09:51.644 "compare": true, 00:09:51.644 "compare_and_write": true, 00:09:51.644 "abort": true, 00:09:51.644 "seek_hole": false, 00:09:51.644 "seek_data": false, 00:09:51.644 "copy": true, 00:09:51.644 "nvme_iov_md": false 00:09:51.644 }, 00:09:51.644 "memory_domains": [ 00:09:51.644 { 00:09:51.644 "dma_device_id": "system", 00:09:51.644 "dma_device_type": 1 00:09:51.644 } 00:09:51.644 ], 00:09:51.644 "driver_specific": { 00:09:51.644 "nvme": [ 00:09:51.644 { 00:09:51.644 "trid": { 00:09:51.644 "trtype": "TCP", 00:09:51.644 "adrfam": "IPv4", 00:09:51.644 "traddr": "10.0.0.2", 00:09:51.644 "trsvcid": "4420", 00:09:51.644 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:51.644 }, 00:09:51.644 "ctrlr_data": { 00:09:51.644 "cntlid": 1, 00:09:51.644 "vendor_id": "0x8086", 00:09:51.644 "model_number": "SPDK bdev Controller", 00:09:51.644 "serial_number": "SPDK0", 00:09:51.644 "firmware_revision": "25.01", 00:09:51.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:51.644 "oacs": { 00:09:51.645 "security": 0, 00:09:51.645 "format": 0, 00:09:51.645 "firmware": 0, 00:09:51.645 "ns_manage": 0 00:09:51.645 }, 00:09:51.645 "multi_ctrlr": true, 00:09:51.645 
"ana_reporting": false 00:09:51.645 }, 00:09:51.645 "vs": { 00:09:51.645 "nvme_version": "1.3" 00:09:51.645 }, 00:09:51.645 "ns_data": { 00:09:51.645 "id": 1, 00:09:51.645 "can_share": true 00:09:51.645 } 00:09:51.645 } 00:09:51.645 ], 00:09:51.645 "mp_policy": "active_passive" 00:09:51.645 } 00:09:51.645 } 00:09:51.645 ] 00:09:51.645 07:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:51.645 07:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1276157 00:09:51.645 07:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:51.645 Running I/O for 10 seconds... 00:09:53.027 Latency(us) 00:09:53.027 [2024-11-26T06:19:21.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.027 Nvme0n1 : 1.00 25050.00 97.85 0.00 0.00 0.00 0.00 0.00 00:09:53.027 [2024-11-26T06:19:21.125Z] =================================================================================================================== 00:09:53.027 [2024-11-26T06:19:21.125Z] Total : 25050.00 97.85 0.00 0.00 0.00 0.00 0.00 00:09:53.027 00:09:53.597 07:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:09:53.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.857 Nvme0n1 : 2.00 25236.50 98.58 0.00 0.00 0.00 0.00 0.00 00:09:53.857 [2024-11-26T06:19:21.955Z] =================================================================================================================== 00:09:53.857 [2024-11-26T06:19:21.955Z] Total : 25236.50 98.58 0.00 0.00 0.00 0.00 0.00 00:09:53.857 00:09:53.857 true 00:09:53.858 07:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:09:53.858 07:19:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:54.118 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:54.118 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:54.118 07:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1276157 00:09:54.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.688 Nvme0n1 : 3.00 25325.00 98.93 0.00 0.00 0.00 0.00 0.00 00:09:54.688 [2024-11-26T06:19:22.786Z] =================================================================================================================== 00:09:54.688 [2024-11-26T06:19:22.786Z] Total : 25325.00 98.93 0.00 0.00 0.00 0.00 0.00 00:09:54.688 00:09:55.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.630 Nvme0n1 : 4.00 25381.75 99.15 0.00 0.00 0.00 0.00 0.00 00:09:55.630 [2024-11-26T06:19:23.728Z] 
=================================================================================================================== 00:09:55.630 [2024-11-26T06:19:23.728Z] Total : 25381.75 99.15 0.00 0.00 0.00 0.00 0.00 00:09:55.630 00:09:57.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.018 Nvme0n1 : 5.00 25412.40 99.27 0.00 0.00 0.00 0.00 0.00 00:09:57.018 [2024-11-26T06:19:25.116Z] =================================================================================================================== 00:09:57.018 [2024-11-26T06:19:25.116Z] Total : 25412.40 99.27 0.00 0.00 0.00 0.00 0.00 00:09:57.018 00:09:57.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.961 Nvme0n1 : 6.00 25443.50 99.39 0.00 0.00 0.00 0.00 0.00 00:09:57.961 [2024-11-26T06:19:26.059Z] =================================================================================================================== 00:09:57.961 [2024-11-26T06:19:26.059Z] Total : 25443.50 99.39 0.00 0.00 0.00 0.00 0.00 00:09:57.961 00:09:58.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.904 Nvme0n1 : 7.00 25468.14 99.48 0.00 0.00 0.00 0.00 0.00 00:09:58.904 [2024-11-26T06:19:27.002Z] =================================================================================================================== 00:09:58.904 [2024-11-26T06:19:27.002Z] Total : 25468.14 99.48 0.00 0.00 0.00 0.00 0.00 00:09:58.904 00:09:59.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.846 Nvme0n1 : 8.00 25490.25 99.57 0.00 0.00 0.00 0.00 0.00 00:09:59.846 [2024-11-26T06:19:27.944Z] =================================================================================================================== 00:09:59.846 [2024-11-26T06:19:27.944Z] Total : 25490.25 99.57 0.00 0.00 0.00 0.00 0.00 00:09:59.846 00:10:00.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.789 Nvme0n1 : 9.00 25502.44 99.62 0.00 0.00 0.00 0.00 0.00 00:10:00.789 [2024-11-26T06:19:28.887Z] =================================================================================================================== 00:10:00.789 [2024-11-26T06:19:28.887Z] Total : 25502.44 99.62 0.00 0.00 0.00 0.00 0.00 00:10:00.789 00:10:01.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.731 Nvme0n1 : 10.00 25518.50 99.68 0.00 0.00 0.00 0.00 0.00 00:10:01.731 [2024-11-26T06:19:29.829Z] =================================================================================================================== 00:10:01.731 [2024-11-26T06:19:29.829Z] Total : 25518.50 99.68 0.00 0.00 0.00 0.00 0.00 00:10:01.731 00:10:01.731 00:10:01.731 Latency(us) 00:10:01.731 [2024-11-26T06:19:29.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.731 Nvme0n1 : 10.00 25517.14 99.68 0.00 0.00 5012.67 2007.04 10704.21 00:10:01.731 [2024-11-26T06:19:29.829Z] =================================================================================================================== 00:10:01.731 [2024-11-26T06:19:29.829Z] Total : 25517.14 99.68 0.00 0.00 5012.67 2007.04 10704.21 00:10:01.731 { 00:10:01.731 "results": [ 00:10:01.731 { 00:10:01.731 "job": "Nvme0n1", 00:10:01.731 "core_mask": "0x2", 00:10:01.731 "workload": "randwrite", 00:10:01.731 "status": "finished", 00:10:01.731 "queue_depth": 128, 00:10:01.731 "io_size": 4096, 00:10:01.731 
"runtime": 10.003079, 00:10:01.731 "iops": 25517.14327158668, 00:10:01.731 "mibps": 99.67634090463547, 00:10:01.731 "io_failed": 0, 00:10:01.731 "io_timeout": 0, 00:10:01.731 "avg_latency_us": 5012.665195899445, 00:10:01.731 "min_latency_us": 2007.04, 00:10:01.731 "max_latency_us": 10704.213333333333 00:10:01.731 } 00:10:01.731 ], 00:10:01.731 "core_count": 1 00:10:01.731 } 00:10:01.731 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1275854 00:10:01.731 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1275854 ']' 00:10:01.731 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1275854 00:10:01.731 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:01.731 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.731 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1275854 00:10:01.993 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:01.993 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:01.993 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1275854' 00:10:01.993 killing process with pid 1275854 00:10:01.993 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1275854 00:10:01.993 Received shutdown signal, test time was about 10.000000 seconds 00:10:01.993 00:10:01.993 Latency(us) 00:10:01.993 [2024-11-26T06:19:30.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.993 [2024-11-26T06:19:30.091Z] =================================================================================================================== 00:10:01.993 [2024-11-26T06:19:30.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:01.993 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1275854 00:10:01.993 07:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:02.253 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:02.253 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:10:02.253 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:02.514 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:02.514 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:02.514 07:19:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:02.776 [2024-11-26 07:19:30.618340] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:10:02.776 request: 00:10:02.776 { 00:10:02.776 "uuid": "5babd12a-c8a0-4bc1-9c49-ac7e843bd400", 00:10:02.776 "method": "bdev_lvol_get_lvstores", 00:10:02.776 "req_id": 1 00:10:02.776 } 00:10:02.776 Got JSON-RPC error response 00:10:02.776 response: 00:10:02.776 { 00:10:02.776 "code": -19, 00:10:02.776 "message": "No such device" 00:10:02.776 } 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.776 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:03.037 aio_bdev 00:10:03.037 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 43bfd6b2-caf6-4ea1-8230-598ce5dc3efe 00:10:03.037 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=43bfd6b2-caf6-4ea1-8230-598ce5dc3efe 00:10:03.037 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.037 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:03.037 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.037 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.037 07:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:03.298 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 43bfd6b2-caf6-4ea1-8230-598ce5dc3efe -t 2000 00:10:03.298 [ 00:10:03.298 { 00:10:03.298 "name": "43bfd6b2-caf6-4ea1-8230-598ce5dc3efe", 00:10:03.298 "aliases": [ 00:10:03.298 "lvs/lvol" 00:10:03.298 ], 00:10:03.298 "product_name": "Logical Volume", 00:10:03.298 "block_size": 4096, 00:10:03.298 "num_blocks": 38912, 00:10:03.298 "uuid": "43bfd6b2-caf6-4ea1-8230-598ce5dc3efe", 00:10:03.298 "assigned_rate_limits": { 00:10:03.298 "rw_ios_per_sec": 0, 00:10:03.298 "rw_mbytes_per_sec": 0, 00:10:03.298 "r_mbytes_per_sec": 0, 00:10:03.298 "w_mbytes_per_sec": 0 00:10:03.298 }, 00:10:03.298 "claimed": false, 00:10:03.298 "zoned": false, 00:10:03.298 "supported_io_types": { 00:10:03.298 "read": true, 00:10:03.298 "write": true, 00:10:03.298 "unmap": true, 00:10:03.298 "flush": false, 00:10:03.298 "reset": true, 00:10:03.298 "nvme_admin": false, 00:10:03.298 "nvme_io": false, 00:10:03.298 "nvme_io_md": false, 00:10:03.298 "write_zeroes": true, 00:10:03.298 "zcopy": false, 00:10:03.298 "get_zone_info": false, 00:10:03.298 "zone_management": false, 00:10:03.298 "zone_append": false, 00:10:03.298 "compare": false, 00:10:03.298 "compare_and_write": false, 00:10:03.298 "abort": false, 00:10:03.298 "seek_hole": true, 00:10:03.298 "seek_data": true, 00:10:03.298 "copy": false, 00:10:03.298 "nvme_iov_md": false 00:10:03.298 }, 00:10:03.298 "driver_specific": { 00:10:03.298 "lvol": { 00:10:03.298 "lvol_store_uuid": "5babd12a-c8a0-4bc1-9c49-ac7e843bd400", 00:10:03.298 "base_bdev": "aio_bdev", 00:10:03.298 "thin_provision": false, 00:10:03.298 "num_allocated_clusters": 38, 00:10:03.298 "snapshot": false, 00:10:03.298 "clone": false, 00:10:03.298 "esnap_clone": false 00:10:03.298 } 00:10:03.298 } 00:10:03.298 } 00:10:03.298 ] 00:10:03.298 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:03.298 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:10:03.298 
07:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:03.559 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:03.559 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:10:03.559 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:03.559 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:03.559 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 43bfd6b2-caf6-4ea1-8230-598ce5dc3efe 00:10:03.820 07:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5babd12a-c8a0-4bc1-9c49-ac7e843bd400 00:10:04.082 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:04.343 00:10:04.343 real 0m15.819s 00:10:04.343 user 0m15.547s 00:10:04.343 sys 0m1.411s 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:04.343 ************************************ 00:10:04.343 END TEST lvs_grow_clean 00:10:04.343 ************************************ 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:04.343 ************************************ 00:10:04.343 START TEST lvs_grow_dirty 00:10:04.343 ************************************ 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:04.343 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:04.604 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:04.604 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:04.604 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:04.604 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:04.604 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:04.865 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:04.865 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:04.865 07:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 lvol 150 00:10:05.127 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1f4eea8d-a670-4079-b4d9-d10b037a1dae 00:10:05.127 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:05.127 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:05.127 [2024-11-26 07:19:33.166272] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:05.127 [2024-11-26 07:19:33.166315] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:05.127 true 00:10:05.127 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:05.127 07:19:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:05.387 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:05.387 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:05.648 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1f4eea8d-a670-4079-b4d9-d10b037a1dae 00:10:05.648 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:05.910 [2024-11-26 07:19:33.824208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:05.910 07:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:06.171 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1278964 00:10:06.171 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:06.171 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:06.171 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1278964 /var/tmp/bdevperf.sock 00:10:06.171 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1278964 ']' 00:10:06.171 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:06.171 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.171 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:06.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:06.171 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.171 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:06.171 [2024-11-26 07:19:34.056713] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
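
The trace above creates a 200M AIO-backed lvstore for the dirty-grow case, and the steps that follow grow it to 400M and verify the cluster counts. As a minimal sketch of that flow — condensed from this log, assuming a running SPDK target, and using only the RPCs visible in the trace (SPDK below is shorthand for the checkout root shown in the paths above):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    AIO="$SPDK/test/nvmf/target/aio_bdev"

    truncate -s 200M "$AIO"                       # backing file for the AIO bdev
    "$RPC" bdev_aio_create "$AIO" aio_bdev 4096   # 4 KiB logical block size
    lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    "$RPC" bdev_lvol_create -u "$lvs" lvol 150    # 150 MiB lvol = 38 clusters

    truncate -s 400M "$AIO"                       # grow the backing file ...
    "$RPC" bdev_aio_rescan aio_bdev               # ... let the bdev re-read its size
    "$RPC" bdev_lvol_grow_lvstore -u "$lvs"       # lvstore claims the new clusters

    # 200M at 4 MiB per cluster gave 49 data clusters above; after the grow the
    # test expects 99 total and 61 free (99 minus the lvol's 38 allocated).
    "$RPC" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

The rescan and grow are separate steps because the AIO bdev must first pick up the larger file (the rescan notice above reports the block count going from 51200 to 102400) before bdev_lvol_grow_lvstore can extend the lvstore onto the new space.
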
00:10:06.171 [2024-11-26 07:19:34.056764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278964 ] 00:10:06.171 [2024-11-26 07:19:34.137412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.171 [2024-11-26 07:19:34.167226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.116 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.116 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:07.116 07:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:07.377 Nvme0n1 00:10:07.377 07:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:07.377 [ 00:10:07.377 { 00:10:07.377 "name": "Nvme0n1", 00:10:07.377 "aliases": [ 00:10:07.377 "1f4eea8d-a670-4079-b4d9-d10b037a1dae" 00:10:07.377 ], 00:10:07.377 "product_name": "NVMe disk", 00:10:07.377 "block_size": 4096, 00:10:07.377 "num_blocks": 38912, 00:10:07.377 "uuid": "1f4eea8d-a670-4079-b4d9-d10b037a1dae", 00:10:07.377 "numa_id": 0, 00:10:07.377 "assigned_rate_limits": { 00:10:07.377 "rw_ios_per_sec": 0, 00:10:07.377 "rw_mbytes_per_sec": 0, 00:10:07.377 "r_mbytes_per_sec": 0, 00:10:07.377 "w_mbytes_per_sec": 0 00:10:07.377 }, 00:10:07.377 "claimed": false, 00:10:07.377 "zoned": false, 00:10:07.377 "supported_io_types": { 00:10:07.377 "read": true, 00:10:07.377 "write": true, 00:10:07.377 "unmap": true, 00:10:07.377 "flush": true, 00:10:07.377 "reset": true, 00:10:07.377 "nvme_admin": true, 00:10:07.377 "nvme_io": true, 00:10:07.377 "nvme_io_md": false, 00:10:07.377 "write_zeroes": true, 00:10:07.377 "zcopy": false, 00:10:07.377 "get_zone_info": false, 00:10:07.377 "zone_management": false, 00:10:07.377 "zone_append": false, 00:10:07.377 "compare": true, 00:10:07.377 "compare_and_write": true, 00:10:07.377 "abort": true, 00:10:07.377 "seek_hole": false, 00:10:07.377 "seek_data": false, 00:10:07.377 "copy": true, 00:10:07.377 "nvme_iov_md": false 00:10:07.377 }, 00:10:07.377 "memory_domains": [ 00:10:07.377 { 00:10:07.377 "dma_device_id": "system", 00:10:07.377 "dma_device_type": 1 00:10:07.377 } 00:10:07.377 ], 00:10:07.377 "driver_specific": { 00:10:07.377 "nvme": [ 00:10:07.377 { 00:10:07.377 "trid": { 00:10:07.377 "trtype": "TCP", 00:10:07.377 "adrfam": "IPv4", 00:10:07.377 "traddr": "10.0.0.2", 00:10:07.377 "trsvcid": "4420", 00:10:07.377 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:07.377 }, 00:10:07.377 "ctrlr_data": { 00:10:07.377 "cntlid": 1, 00:10:07.377 "vendor_id": "0x8086", 00:10:07.377 "model_number": "SPDK bdev Controller", 00:10:07.377 "serial_number": "SPDK0", 00:10:07.377 "firmware_revision": "25.01", 00:10:07.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:07.377 "oacs": { 00:10:07.377 "security": 0, 00:10:07.377 "format": 0, 00:10:07.377 "firmware": 0, 00:10:07.377 "ns_manage": 0 00:10:07.377 }, 00:10:07.377 "multi_ctrlr": true, 00:10:07.377 
"ana_reporting": false 00:10:07.377 }, 00:10:07.377 "vs": { 00:10:07.377 "nvme_version": "1.3" 00:10:07.377 }, 00:10:07.377 "ns_data": { 00:10:07.377 "id": 1, 00:10:07.377 "can_share": true 00:10:07.377 } 00:10:07.377 } 00:10:07.377 ], 00:10:07.377 "mp_policy": "active_passive" 00:10:07.377 } 00:10:07.377 } 00:10:07.377 ] 00:10:07.377 07:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1279306 00:10:07.377 07:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:07.377 07:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:07.638 Running I/O for 10 seconds... 00:10:08.582 Latency(us) 00:10:08.582 [2024-11-26T06:19:36.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.582 Nvme0n1 : 1.00 25081.00 97.97 0.00 0.00 0.00 0.00 0.00 00:10:08.582 [2024-11-26T06:19:36.680Z] =================================================================================================================== 00:10:08.582 [2024-11-26T06:19:36.680Z] Total : 25081.00 97.97 0.00 0.00 0.00 0.00 0.00 00:10:08.582 00:10:09.524 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:09.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.524 Nvme0n1 : 2.00 25272.50 98.72 0.00 0.00 0.00 0.00 0.00 00:10:09.524 [2024-11-26T06:19:37.622Z] =================================================================================================================== 00:10:09.524 [2024-11-26T06:19:37.622Z] Total : 25272.50 98.72 0.00 0.00 0.00 0.00 0.00 00:10:09.524 00:10:09.524 true 00:10:09.785 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:09.785 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:09.785 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:09.785 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:09.785 07:19:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1279306 00:10:10.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.729 Nvme0n1 : 3.00 25332.33 98.95 0.00 0.00 0.00 0.00 0.00 00:10:10.729 [2024-11-26T06:19:38.827Z] =================================================================================================================== 00:10:10.729 [2024-11-26T06:19:38.827Z] Total : 25332.33 98.95 0.00 0.00 0.00 0.00 0.00 00:10:10.729 00:10:11.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.675 Nvme0n1 : 4.00 25390.75 99.18 0.00 0.00 0.00 0.00 0.00 00:10:11.675 [2024-11-26T06:19:39.773Z] 
=================================================================================================================== 00:10:11.675 [2024-11-26T06:19:39.773Z] Total : 25390.75 99.18 0.00 0.00 0.00 0.00 0.00 00:10:11.675 00:10:12.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.617 Nvme0n1 : 5.00 25428.40 99.33 0.00 0.00 0.00 0.00 0.00 00:10:12.617 [2024-11-26T06:19:40.715Z] =================================================================================================================== 00:10:12.617 [2024-11-26T06:19:40.715Z] Total : 25428.40 99.33 0.00 0.00 0.00 0.00 0.00 00:10:12.617 00:10:13.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.591 Nvme0n1 : 6.00 25443.83 99.39 0.00 0.00 0.00 0.00 0.00 00:10:13.591 [2024-11-26T06:19:41.689Z] =================================================================================================================== 00:10:13.591 [2024-11-26T06:19:41.689Z] Total : 25443.83 99.39 0.00 0.00 0.00 0.00 0.00 00:10:13.591 00:10:14.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.530 Nvme0n1 : 7.00 25474.43 99.51 0.00 0.00 0.00 0.00 0.00 00:10:14.530 [2024-11-26T06:19:42.628Z] =================================================================================================================== 00:10:14.530 [2024-11-26T06:19:42.628Z] Total : 25474.43 99.51 0.00 0.00 0.00 0.00 0.00 00:10:14.530 00:10:15.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.471 Nvme0n1 : 8.00 25487.00 99.56 0.00 0.00 0.00 0.00 0.00 00:10:15.471 [2024-11-26T06:19:43.569Z] =================================================================================================================== 00:10:15.471 [2024-11-26T06:19:43.569Z] Total : 25487.00 99.56 0.00 0.00 0.00 0.00 0.00 00:10:15.471 00:10:16.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.853 Nvme0n1 : 9.00 25504.67 99.63 0.00 0.00 0.00 0.00 0.00 00:10:16.853 [2024-11-26T06:19:44.951Z] =================================================================================================================== 00:10:16.853 [2024-11-26T06:19:44.951Z] Total : 25504.67 99.63 0.00 0.00 0.00 0.00 0.00 00:10:16.853 00:10:17.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.805 Nvme0n1 : 10.00 25521.10 99.69 0.00 0.00 0.00 0.00 0.00 00:10:17.805 [2024-11-26T06:19:45.903Z] =================================================================================================================== 00:10:17.805 [2024-11-26T06:19:45.903Z] Total : 25521.10 99.69 0.00 0.00 0.00 0.00 0.00 00:10:17.805 00:10:17.805 00:10:17.805 Latency(us) 00:10:17.805 [2024-11-26T06:19:45.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.805 Nvme0n1 : 10.00 25523.41 99.70 0.00 0.00 5012.22 2894.51 10704.21 00:10:17.805 [2024-11-26T06:19:45.903Z] =================================================================================================================== 00:10:17.805 [2024-11-26T06:19:45.903Z] Total : 25523.41 99.70 0.00 0.00 5012.22 2894.51 10704.21 00:10:17.805 { 00:10:17.805 "results": [ 00:10:17.805 { 00:10:17.805 "job": "Nvme0n1", 00:10:17.805 "core_mask": "0x2", 00:10:17.805 "workload": "randwrite", 00:10:17.805 "status": "finished", 00:10:17.805 "queue_depth": 128, 00:10:17.805 "io_size": 4096, 00:10:17.805 
"runtime": 10.003445, 00:10:17.805 "iops": 25523.407186224347, 00:10:17.805 "mibps": 99.70080932118886, 00:10:17.805 "io_failed": 0, 00:10:17.806 "io_timeout": 0, 00:10:17.806 "avg_latency_us": 5012.215980761548, 00:10:17.806 "min_latency_us": 2894.5066666666667, 00:10:17.806 "max_latency_us": 10704.213333333333 00:10:17.806 } 00:10:17.806 ], 00:10:17.806 "core_count": 1 00:10:17.806 } 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1278964 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1278964 ']' 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1278964 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1278964 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1278964' 00:10:17.806 killing process with pid 1278964 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1278964 00:10:17.806 Received shutdown signal, test time was about 10.000000 seconds 00:10:17.806 00:10:17.806 Latency(us) 00:10:17.806 [2024-11-26T06:19:45.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.806 [2024-11-26T06:19:45.904Z] =================================================================================================================== 00:10:17.806 [2024-11-26T06:19:45.904Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1278964 00:10:17.806 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:18.150 07:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:18.150 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:18.150 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:18.435 07:19:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1275146 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1275146 00:10:18.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1275146 Killed "${NVMF_APP[@]}" "$@" 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1281368 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1281368 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1281368 ']' 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.435 07:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:18.435 [2024-11-26 07:19:46.345731] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:10:18.435 [2024-11-26 07:19:46.345788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.435 [2024-11-26 07:19:46.435399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.435 [2024-11-26 07:19:46.465331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.435 [2024-11-26 07:19:46.465358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.435 [2024-11-26 07:19:46.465363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.435 [2024-11-26 07:19:46.465368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:18.435 [2024-11-26 07:19:46.465372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.435 [2024-11-26 07:19:46.465815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:19.378 [2024-11-26 07:19:47.327347] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:19.378 [2024-11-26 07:19:47.327417] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:19.378 [2024-11-26 07:19:47.327440] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1f4eea8d-a670-4079-b4d9-d10b037a1dae 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1f4eea8d-a670-4079-b4d9-d10b037a1dae 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:19.378 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:19.639 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1f4eea8d-a670-4079-b4d9-d10b037a1dae -t 2000 00:10:19.639 [ 00:10:19.639 { 00:10:19.639 "name": "1f4eea8d-a670-4079-b4d9-d10b037a1dae", 00:10:19.639 "aliases": [ 00:10:19.639 "lvs/lvol" 00:10:19.639 ], 00:10:19.639 "product_name": "Logical Volume", 00:10:19.639 "block_size": 4096, 00:10:19.639 "num_blocks": 38912, 00:10:19.639 "uuid": "1f4eea8d-a670-4079-b4d9-d10b037a1dae", 00:10:19.639 "assigned_rate_limits": { 00:10:19.639 "rw_ios_per_sec": 0, 00:10:19.639 "rw_mbytes_per_sec": 0, 
00:10:19.639 "r_mbytes_per_sec": 0, 00:10:19.639 "w_mbytes_per_sec": 0 00:10:19.639 }, 00:10:19.639 "claimed": false, 00:10:19.639 "zoned": false, 00:10:19.639 "supported_io_types": { 00:10:19.639 "read": true, 00:10:19.639 "write": true, 00:10:19.639 "unmap": true, 00:10:19.639 "flush": false, 00:10:19.639 "reset": true, 00:10:19.639 "nvme_admin": false, 00:10:19.639 "nvme_io": false, 00:10:19.639 "nvme_io_md": false, 00:10:19.639 "write_zeroes": true, 00:10:19.639 "zcopy": false, 00:10:19.639 "get_zone_info": false, 00:10:19.639 "zone_management": false, 00:10:19.639 "zone_append": false, 00:10:19.639 "compare": false, 00:10:19.639 "compare_and_write": false, 00:10:19.639 "abort": false, 00:10:19.639 "seek_hole": true, 00:10:19.639 "seek_data": true, 00:10:19.639 "copy": false, 00:10:19.639 "nvme_iov_md": false 00:10:19.639 }, 00:10:19.639 "driver_specific": { 00:10:19.639 "lvol": { 00:10:19.639 "lvol_store_uuid": "8fc7a57e-490e-4138-b55e-98e1c5e46438", 00:10:19.639 "base_bdev": "aio_bdev", 00:10:19.639 "thin_provision": false, 00:10:19.639 "num_allocated_clusters": 38, 00:10:19.639 "snapshot": false, 00:10:19.639 "clone": false, 00:10:19.639 "esnap_clone": false 00:10:19.639 } 00:10:19.639 } 00:10:19.639 } 00:10:19.639 ] 00:10:19.639 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:19.639 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:19.639 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:19.899 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:19.900 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:19.900 07:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:20.160 [2024-11-26 07:19:48.155948] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:20.160 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:20.420 request: 00:10:20.420 { 00:10:20.420 "uuid": "8fc7a57e-490e-4138-b55e-98e1c5e46438", 00:10:20.420 "method": "bdev_lvol_get_lvstores", 00:10:20.420 "req_id": 1 00:10:20.420 } 00:10:20.420 Got JSON-RPC error response 00:10:20.421 response: 00:10:20.421 { 00:10:20.421 "code": -19, 00:10:20.421 "message": "No such device" 00:10:20.421 } 00:10:20.421 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:20.421 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:20.421 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:20.421 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:20.421 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:20.681 aio_bdev 00:10:20.681 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1f4eea8d-a670-4079-b4d9-d10b037a1dae 00:10:20.681 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1f4eea8d-a670-4079-b4d9-d10b037a1dae 00:10:20.681 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.681 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:20.681 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.681 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.681 07:19:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:20.681 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1f4eea8d-a670-4079-b4d9-d10b037a1dae -t 2000 00:10:20.943 [ 00:10:20.943 { 00:10:20.943 "name": "1f4eea8d-a670-4079-b4d9-d10b037a1dae", 00:10:20.943 "aliases": [ 00:10:20.943 "lvs/lvol" 00:10:20.943 ], 00:10:20.943 "product_name": "Logical Volume", 00:10:20.943 "block_size": 4096, 00:10:20.943 "num_blocks": 38912, 00:10:20.943 "uuid": "1f4eea8d-a670-4079-b4d9-d10b037a1dae", 00:10:20.943 "assigned_rate_limits": { 00:10:20.943 "rw_ios_per_sec": 0, 00:10:20.943 "rw_mbytes_per_sec": 0, 00:10:20.943 "r_mbytes_per_sec": 0, 00:10:20.943 "w_mbytes_per_sec": 0 00:10:20.943 }, 00:10:20.943 "claimed": false, 00:10:20.943 "zoned": false, 00:10:20.943 "supported_io_types": { 00:10:20.943 "read": true, 00:10:20.943 "write": true, 00:10:20.943 "unmap": true, 00:10:20.943 "flush": false, 00:10:20.943 "reset": true, 00:10:20.943 "nvme_admin": false, 00:10:20.943 "nvme_io": false, 00:10:20.943 "nvme_io_md": false, 00:10:20.943 "write_zeroes": true, 00:10:20.943 "zcopy": false, 00:10:20.943 "get_zone_info": false, 00:10:20.943 "zone_management": false, 00:10:20.943 "zone_append": false, 00:10:20.943 "compare": false, 00:10:20.943 "compare_and_write": false, 00:10:20.943 "abort": false, 00:10:20.943 "seek_hole": true, 00:10:20.943 "seek_data": true, 00:10:20.943 "copy": false, 00:10:20.943 "nvme_iov_md": false 00:10:20.943 }, 00:10:20.943 "driver_specific": { 00:10:20.943 "lvol": { 00:10:20.943 "lvol_store_uuid": "8fc7a57e-490e-4138-b55e-98e1c5e46438", 00:10:20.943 "base_bdev": "aio_bdev", 00:10:20.943 "thin_provision": false, 00:10:20.943 "num_allocated_clusters": 38, 00:10:20.943 "snapshot": false, 00:10:20.943 "clone": false, 00:10:20.943 "esnap_clone": false 00:10:20.943 } 00:10:20.943 } 00:10:20.943 } 00:10:20.943 ] 00:10:20.943 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:20.943 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:20.943 07:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:21.208 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:21.208 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:21.208 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:21.208 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:21.208 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1f4eea8d-a670-4079-b4d9-d10b037a1dae 00:10:21.470 07:19:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fc7a57e-490e-4138-b55e-98e1c5e46438 00:10:21.470 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:21.730 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:21.730 00:10:21.730 real 0m17.464s 00:10:21.730 user 0m45.891s 00:10:21.730 sys 0m2.959s 00:10:21.730 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.730 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:21.730 ************************************ 00:10:21.730 END TEST lvs_grow_dirty 00:10:21.730 ************************************ 00:10:21.730 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:21.730 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:21.731 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:21.731 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:21.731 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:21.731 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:21.731 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:21.731 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:21.731 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:21.731 nvmf_trace.0 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.991 rmmod nvme_tcp 00:10:21.991 rmmod nvme_fabrics 00:10:21.991 rmmod nvme_keyring 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:21.991 
07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1281368 ']' 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1281368 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1281368 ']' 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1281368 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.991 07:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1281368 00:10:21.991 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.991 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.991 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1281368' 00:10:21.991 killing process with pid 1281368 00:10:21.991 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1281368 00:10:21.991 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1281368 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.252 07:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.163 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.163 00:10:24.163 real 0m44.656s 00:10:24.163 user 1m7.788s 00:10:24.163 sys 0m10.475s 00:10:24.163 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.163 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:24.164 ************************************ 00:10:24.164 END TEST nvmf_lvs_grow 00:10:24.164 ************************************ 00:10:24.164 07:19:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:24.164 07:19:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.164 07:19:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.164 07:19:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:24.425 ************************************ 00:10:24.425 START TEST nvmf_bdev_io_wait 00:10:24.425 ************************************ 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:24.425 * Looking for test storage... 00:10:24.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:24.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.425 --rc genhtml_branch_coverage=1 00:10:24.425 --rc genhtml_function_coverage=1 00:10:24.425 --rc genhtml_legend=1 00:10:24.425 --rc geninfo_all_blocks=1 00:10:24.425 --rc geninfo_unexecuted_blocks=1 00:10:24.425 00:10:24.425 ' 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:24.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.425 --rc genhtml_branch_coverage=1 00:10:24.425 --rc genhtml_function_coverage=1 00:10:24.425 --rc genhtml_legend=1 00:10:24.425 --rc geninfo_all_blocks=1 00:10:24.425 --rc geninfo_unexecuted_blocks=1 00:10:24.425 00:10:24.425 ' 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:24.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.425 --rc genhtml_branch_coverage=1 00:10:24.425 --rc genhtml_function_coverage=1 00:10:24.425 --rc genhtml_legend=1 00:10:24.425 --rc geninfo_all_blocks=1 00:10:24.425 --rc geninfo_unexecuted_blocks=1 00:10:24.425 00:10:24.425 ' 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:24.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.425 --rc genhtml_branch_coverage=1 00:10:24.425 --rc genhtml_function_coverage=1 00:10:24.425 --rc genhtml_legend=1 00:10:24.425 --rc geninfo_all_blocks=1 00:10:24.425 --rc geninfo_unexecuted_blocks=1 00:10:24.425 00:10:24.425 ' 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.425 07:19:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.425 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.426 07:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:32.565 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:32.565 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.565 07:19:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:32.565 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:32.565 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:32.565 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:10:32.566 00:10:32.566 --- 10.0.0.2 ping statistics --- 00:10:32.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.566 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:32.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:10:32.566 00:10:32.566 --- 10.0.0.1 ping statistics --- 00:10:32.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.566 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:32.566 07:19:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1286423 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1286423 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1286423 ']' 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.566 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:32.566 [2024-11-26 07:20:00.112730] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
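Note on the topology the nvmf_tcp_init sequence above just established: one port of the e810 pair (cvl_0_0, the target side, 10.0.0.2) is moved into a private network namespace while its sibling (cvl_0_1, the initiator side, 10.0.0.1) stays in the root namespace, so target and initiator reach each other over the physical link rather than one shared kernel stack. A minimal standalone sketch of the same setup, using the interface names discovered above (they will differ on other hosts):

ip netns add cvl_0_0_ns_spdk                                  # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
ping -c 1 10.0.0.2                                            # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns

The two pings are exactly the reachability checks whose output appears above and below this note.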
00:10:32.566 [2024-11-26 07:20:00.112796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.566 [2024-11-26 07:20:00.215168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.566 [2024-11-26 07:20:00.270686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.566 [2024-11-26 07:20:00.270740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.566 [2024-11-26 07:20:00.270749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.566 [2024-11-26 07:20:00.270756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.566 [2024-11-26 07:20:00.270762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.566 [2024-11-26 07:20:00.272837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.566 [2024-11-26 07:20:00.272999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.566 [2024-11-26 07:20:00.273127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.566 [2024-11-26 07:20:00.273127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.137 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.137 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:33.137 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.137 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.137 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.137 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.137 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:33.137 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.137 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.137 07:20:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.137 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:33.137 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.137 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.137 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.137 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.137 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:10:33.138 [2024-11-26 07:20:01.068701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.138 Malloc0 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.138 [2024-11-26 07:20:01.134375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1286773 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1286775 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:33.138 { 00:10:33.138 "params": { 
00:10:33.138 "name": "Nvme$subsystem", 00:10:33.138 "trtype": "$TEST_TRANSPORT", 00:10:33.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.138 "adrfam": "ipv4", 00:10:33.138 "trsvcid": "$NVMF_PORT", 00:10:33.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.138 "hdgst": ${hdgst:-false}, 00:10:33.138 "ddgst": ${ddgst:-false} 00:10:33.138 }, 00:10:33.138 "method": "bdev_nvme_attach_controller" 00:10:33.138 } 00:10:33.138 EOF 00:10:33.138 )") 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1286777 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:33.138 { 00:10:33.138 "params": { 00:10:33.138 "name": "Nvme$subsystem", 00:10:33.138 "trtype": "$TEST_TRANSPORT", 00:10:33.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.138 "adrfam": "ipv4", 00:10:33.138 "trsvcid": "$NVMF_PORT", 00:10:33.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.138 "hdgst": ${hdgst:-false}, 00:10:33.138 "ddgst": ${ddgst:-false} 00:10:33.138 }, 00:10:33.138 "method": "bdev_nvme_attach_controller" 00:10:33.138 } 00:10:33.138 EOF 00:10:33.138 )") 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1286780 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:33.138 { 00:10:33.138 "params": { 00:10:33.138 "name": "Nvme$subsystem", 00:10:33.138 "trtype": "$TEST_TRANSPORT", 00:10:33.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.138 "adrfam": "ipv4", 00:10:33.138 "trsvcid": "$NVMF_PORT", 00:10:33.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.138 "hdgst": ${hdgst:-false}, 
00:10:33.138 "ddgst": ${ddgst:-false} 00:10:33.138 }, 00:10:33.138 "method": "bdev_nvme_attach_controller" 00:10:33.138 } 00:10:33.138 EOF 00:10:33.138 )") 00:10:33.138 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:33.139 { 00:10:33.139 "params": { 00:10:33.139 "name": "Nvme$subsystem", 00:10:33.139 "trtype": "$TEST_TRANSPORT", 00:10:33.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:33.139 "adrfam": "ipv4", 00:10:33.139 "trsvcid": "$NVMF_PORT", 00:10:33.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:33.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:33.139 "hdgst": ${hdgst:-false}, 00:10:33.139 "ddgst": ${ddgst:-false} 00:10:33.139 }, 00:10:33.139 "method": "bdev_nvme_attach_controller" 00:10:33.139 } 00:10:33.139 EOF 00:10:33.139 )") 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1286773 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:33.139 "params": { 00:10:33.139 "name": "Nvme1", 00:10:33.139 "trtype": "tcp", 00:10:33.139 "traddr": "10.0.0.2", 00:10:33.139 "adrfam": "ipv4", 00:10:33.139 "trsvcid": "4420", 00:10:33.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.139 "hdgst": false, 00:10:33.139 "ddgst": false 00:10:33.139 }, 00:10:33.139 "method": "bdev_nvme_attach_controller" 00:10:33.139 }' 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:33.139 "params": { 00:10:33.139 "name": "Nvme1", 00:10:33.139 "trtype": "tcp", 00:10:33.139 "traddr": "10.0.0.2", 00:10:33.139 "adrfam": "ipv4", 00:10:33.139 "trsvcid": "4420", 00:10:33.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.139 "hdgst": false, 00:10:33.139 "ddgst": false 00:10:33.139 }, 00:10:33.139 "method": "bdev_nvme_attach_controller" 00:10:33.139 }' 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:33.139 "params": { 00:10:33.139 "name": "Nvme1", 00:10:33.139 "trtype": "tcp", 00:10:33.139 "traddr": "10.0.0.2", 00:10:33.139 "adrfam": "ipv4", 00:10:33.139 "trsvcid": "4420", 00:10:33.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.139 "hdgst": false, 00:10:33.139 "ddgst": false 00:10:33.139 }, 00:10:33.139 "method": "bdev_nvme_attach_controller" 00:10:33.139 }' 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:33.139 07:20:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:33.139 "params": { 00:10:33.139 "name": "Nvme1", 00:10:33.139 "trtype": "tcp", 00:10:33.139 "traddr": "10.0.0.2", 00:10:33.139 "adrfam": "ipv4", 00:10:33.139 "trsvcid": "4420", 00:10:33.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:33.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:33.139 "hdgst": false, 00:10:33.139 "ddgst": false 00:10:33.139 }, 00:10:33.139 "method": "bdev_nvme_attach_controller" 00:10:33.139 }' 00:10:33.139 [2024-11-26 07:20:01.193973] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:10:33.139 [2024-11-26 07:20:01.194045] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:33.139 [2024-11-26 07:20:01.194626] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:10:33.139 [2024-11-26 07:20:01.194694] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:33.139 [2024-11-26 07:20:01.194994] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:10:33.139 [2024-11-26 07:20:01.195050] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:33.139 [2024-11-26 07:20:01.197791] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
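The Starting SPDK / DPDK EAL lines here come from four bdevperf instances launched in parallel against the same cnode1 subsystem, one per workload, each pinned to its own core mask (-m 0x10/0x20/0x40/0x80) and given a distinct shared-memory instance id (-i 1..4, hence the spdk1..spdk4 --file-prefix values in the EAL parameter lines) so their DPDK state does not collide. A sketch of the launch-and-reap pattern; <(gen_nvmf_target_json) is the process substitution that shows up above as --json /dev/fd/63:

BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$BPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$BPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
sync                                             # the 'sync' recorded above
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

The wait calls are what the 'wait 1286773' / 'wait 1286775' / 'wait 1286777' / 'wait 1286780' records around the result tables correspond to.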
00:10:33.139 [2024-11-26 07:20:01.197861] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:33.399 [2024-11-26 07:20:01.417596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.399 [2024-11-26 07:20:01.457205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:33.659 [2024-11-26 07:20:01.512852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.659 [2024-11-26 07:20:01.551495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:33.659 [2024-11-26 07:20:01.606116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.659 [2024-11-26 07:20:01.645115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:33.659 [2024-11-26 07:20:01.673136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.660 [2024-11-26 07:20:01.710556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:33.920 Running I/O for 1 seconds... 00:10:33.920 Running I/O for 1 seconds... 00:10:33.920 Running I/O for 1 seconds... 00:10:33.920 Running I/O for 1 seconds... 00:10:34.864 7653.00 IOPS, 29.89 MiB/s 00:10:34.864 Latency(us) 00:10:34.864 [2024-11-26T06:20:02.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.864 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:34.864 Nvme1n1 : 1.02 7663.45 29.94 0.00 0.00 16565.24 7372.80 26105.17 00:10:34.864 [2024-11-26T06:20:02.962Z] =================================================================================================================== 00:10:34.864 [2024-11-26T06:20:02.962Z] Total : 7663.45 29.94 0.00 0.00 16565.24 7372.80 26105.17 00:10:34.864 10607.00 IOPS, 41.43 MiB/s 00:10:34.864 Latency(us) 00:10:34.864 [2024-11-26T06:20:02.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.864 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:34.864 Nvme1n1 : 1.01 10648.58 41.60 0.00 0.00 11966.69 6280.53 22937.60 00:10:34.864 [2024-11-26T06:20:02.962Z] =================================================================================================================== 00:10:34.864 [2024-11-26T06:20:02.962Z] Total : 10648.58 41.60 0.00 0.00 11966.69 6280.53 22937.60 00:10:34.864 07:20:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1286775 00:10:34.864 07:20:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1286777 00:10:34.864 8178.00 IOPS, 31.95 MiB/s 00:10:34.864 Latency(us) 00:10:34.864 [2024-11-26T06:20:02.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.864 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:34.864 Nvme1n1 : 1.01 8305.38 32.44 0.00 0.00 15371.48 3604.48 39976.96 00:10:34.864 [2024-11-26T06:20:02.962Z] =================================================================================================================== 00:10:34.864 [2024-11-26T06:20:02.962Z] Total : 8305.38 32.44 0.00 0.00 15371.48 3604.48 39976.96 00:10:35.125 180640.00 IOPS, 705.62 MiB/s 00:10:35.125 Latency(us) 00:10:35.125 [2024-11-26T06:20:03.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.125 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 
128, IO size: 4096) 00:10:35.125 Nvme1n1 : 1.00 180280.41 704.22 0.00 0.00 705.86 305.49 1966.08 00:10:35.125 [2024-11-26T06:20:03.223Z] =================================================================================================================== 00:10:35.125 [2024-11-26T06:20:03.223Z] Total : 180280.41 704.22 0.00 0.00 705.86 305.49 1966.08 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1286780 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.125 rmmod nvme_tcp 00:10:35.125 rmmod nvme_fabrics 00:10:35.125 rmmod nvme_keyring 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1286423 ']' 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1286423 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1286423 ']' 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1286423 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.125 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1286423 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1286423' 00:10:35.386 killing process with pid 1286423 
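The MiB/s column in the four result tables above is just IOPS times IO size: for the read job, 10648.58 IOPS at 4096-byte I/Os is about 41.60 MiB/s, matching the reported total, and the flush job's ~180k IOPS at the same size is consistent with flush being close to a no-op on a RAM-backed Malloc bdev. A quick check:

awk 'BEGIN { printf "%.2f MiB/s\n", 10648.58 * 4096 / (1024 * 1024) }'    # 41.60, read row
awk 'BEGIN { printf "%.2f MiB/s\n", 180280.41 * 4096 / (1024 * 1024) }'   # 704.22, flush row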
00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1286423 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1286423 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.386 07:20:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.932 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.932 00:10:37.932 real 0m13.172s 00:10:37.933 user 0m19.930s 00:10:37.933 sys 0m7.436s 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:37.933 ************************************ 00:10:37.933 END TEST nvmf_bdev_io_wait 00:10:37.933 ************************************ 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.933 ************************************ 00:10:37.933 START TEST nvmf_queue_depth 00:10:37.933 ************************************ 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:37.933 * Looking for test storage... 
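queue_depth.sh re-sources test/nvmf/common.sh, so the harmless '[: : integer expression expected' complaint from its line 33 appears again below, exactly as it did for bdev_io_wait above: the xtrace shows build_nvmf_app_args evaluating '[' '' -eq 1 ']', a numeric test against a variable that is unset in this run. Defaulting the variable before the test would silence it; a sketch only, since the variable's real name at common.sh line 33 is already expanded away in this log and SPDK_TEST_FLAG below is a placeholder:

if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then   # placeholder name; ':-0' avoids the empty-string -eq test
    :                                       # whatever line 33 actually gates is not visible here
fi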
00:10:37.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.933 --rc genhtml_branch_coverage=1 00:10:37.933 --rc genhtml_function_coverage=1 00:10:37.933 --rc genhtml_legend=1 00:10:37.933 --rc geninfo_all_blocks=1 00:10:37.933 --rc geninfo_unexecuted_blocks=1 00:10:37.933 00:10:37.933 ' 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.933 --rc genhtml_branch_coverage=1 00:10:37.933 --rc genhtml_function_coverage=1 00:10:37.933 --rc genhtml_legend=1 00:10:37.933 --rc geninfo_all_blocks=1 00:10:37.933 --rc geninfo_unexecuted_blocks=1 00:10:37.933 00:10:37.933 ' 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.933 --rc genhtml_branch_coverage=1 00:10:37.933 --rc genhtml_function_coverage=1 00:10:37.933 --rc genhtml_legend=1 00:10:37.933 --rc geninfo_all_blocks=1 00:10:37.933 --rc geninfo_unexecuted_blocks=1 00:10:37.933 00:10:37.933 ' 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.933 --rc genhtml_branch_coverage=1 00:10:37.933 --rc genhtml_function_coverage=1 00:10:37.933 --rc genhtml_legend=1 00:10:37.933 --rc geninfo_all_blocks=1 00:10:37.933 --rc geninfo_unexecuted_blocks=1 00:10:37.933 00:10:37.933 ' 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.933 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.934 07:20:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:46.078 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:46.078 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:46.078 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.078 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:46.079 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.079 07:20:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:10:46.079 00:10:46.079 --- 10.0.0.2 ping statistics --- 00:10:46.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.079 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:10:46.079 00:10:46.079 --- 10.0.0.1 ping statistics --- 00:10:46.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.079 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1291470 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1291470 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1291470 ']' 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.079 07:20:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.079 [2024-11-26 07:20:13.374238] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
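The namespace plumbing that nvmf_tcp_init traced just above reduces to a handful of ip/iptables commands: the target-side port is moved into its own network namespace so the initiator (10.0.0.1) and target (10.0.0.2) exchange real TCP traffic over the back-to-back E810 ports. A condensed replay, assuming the two ports enumerated above as cvl_0_0/cvl_0_1 (a sketch, not the harness script itself):

#!/usr/bin/env bash
# Condensed replay of the nvmf_tcp_init steps in the trace above.
set -euo pipefail

TGT_IF=cvl_0_0          # target-side port (NVMF_TARGET_INTERFACE above)
INI_IF=cvl_0_1          # initiator-side port (NVMF_INITIATOR_INTERFACE)
NS=cvl_0_0_ns_spdk      # namespace the nvmf_tgt will later run in

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"            # target port leaves the root ns

ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator IP stays in root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port; the comment tag lets cleanup grep the
# rule back out later (see the iptables-save | grep -v SPDK_NVMF teardown).
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                           # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> root ns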
00:10:46.079 [2024-11-26 07:20:13.374299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.079 [2024-11-26 07:20:13.479968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.079 [2024-11-26 07:20:13.529801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.079 [2024-11-26 07:20:13.529852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.079 [2024-11-26 07:20:13.529861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.079 [2024-11-26 07:20:13.529869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.079 [2024-11-26 07:20:13.529875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.079 [2024-11-26 07:20:13.530662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.342 [2024-11-26 07:20:14.248591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.342 Malloc0 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.342 07:20:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.342 [2024-11-26 07:20:14.309529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1291553 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1291553 /var/tmp/bdevperf.sock 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1291553 ']' 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:46.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.342 07:20:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:46.342 [2024-11-26 07:20:14.367212] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
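The queue_depth.sh sequence traced above is a plain chain of JSON-RPC calls followed by a bdevperf run; every flag below is taken verbatim from the trace, and only SPDK_DIR is a hypothetical stand-in for the checkout path. A minimal replay against an already-running nvmf_tgt:

# Sketch of the target-side RPC setup from queue_depth.sh, as traced above.
SPDK_DIR=/path/to/spdk                 # hypothetical checkout location
RPC="$SPDK_DIR/scripts/rpc.py"         # defaults to /var/tmp/spdk.sock

$RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB io_unit_size
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB ramdisk, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive it with bdevperf: queue depth 1024, 4 KiB verify I/O, 10 seconds.
# -z makes bdevperf idle until perform_tests is sent over its own RPC socket.
"$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 1024 -o 4096 -w verify -t 10 &
sleep 2    # crude stand-in for the harness's waitforlisten on bdevperf.sock

$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests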
00:10:46.342 [2024-11-26 07:20:14.367278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291553 ] 00:10:46.604 [2024-11-26 07:20:14.458904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.604 [2024-11-26 07:20:14.513703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.176 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.176 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:47.176 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:47.176 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.176 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:47.437 NVMe0n1 00:10:47.437 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.437 07:20:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:47.437 Running I/O for 10 seconds... 00:10:49.761 9216.00 IOPS, 36.00 MiB/s [2024-11-26T06:20:18.799Z] 10306.50 IOPS, 40.26 MiB/s [2024-11-26T06:20:19.740Z] 10831.67 IOPS, 42.31 MiB/s [2024-11-26T06:20:20.680Z] 11259.25 IOPS, 43.98 MiB/s [2024-11-26T06:20:21.623Z] 11620.80 IOPS, 45.39 MiB/s [2024-11-26T06:20:22.565Z] 11896.17 IOPS, 46.47 MiB/s [2024-11-26T06:20:23.506Z] 12135.71 IOPS, 47.41 MiB/s [2024-11-26T06:20:24.890Z] 12288.62 IOPS, 48.00 MiB/s [2024-11-26T06:20:25.460Z] 12494.78 IOPS, 48.81 MiB/s [2024-11-26T06:20:25.720Z] 12597.40 IOPS, 49.21 MiB/s 00:10:57.622 Latency(us) 00:10:57.622 [2024-11-26T06:20:25.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.622 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:57.622 Verification LBA range: start 0x0 length 0x4000 00:10:57.622 NVMe0n1 : 10.05 12636.81 49.36 0.00 0.00 80771.65 9011.20 67283.63 00:10:57.622 [2024-11-26T06:20:25.720Z] =================================================================================================================== 00:10:57.622 [2024-11-26T06:20:25.720Z] Total : 12636.81 49.36 0.00 0.00 80771.65 9011.20 67283.63 00:10:57.622 { 00:10:57.622 "results": [ 00:10:57.622 { 00:10:57.622 "job": "NVMe0n1", 00:10:57.622 "core_mask": "0x1", 00:10:57.622 "workload": "verify", 00:10:57.622 "status": "finished", 00:10:57.622 "verify_range": { 00:10:57.622 "start": 0, 00:10:57.622 "length": 16384 00:10:57.622 }, 00:10:57.622 "queue_depth": 1024, 00:10:57.622 "io_size": 4096, 00:10:57.622 "runtime": 10.046445, 00:10:57.622 "iops": 12636.808343647926, 00:10:57.622 "mibps": 49.36253259237471, 00:10:57.622 "io_failed": 0, 00:10:57.622 "io_timeout": 0, 00:10:57.622 "avg_latency_us": 80771.65223037034, 00:10:57.622 "min_latency_us": 9011.2, 00:10:57.622 "max_latency_us": 67283.62666666666 00:10:57.622 } 00:10:57.622 ], 00:10:57.622 "core_count": 1 00:10:57.622 } 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- target/queue_depth.sh@39 -- # killprocess 1291553 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1291553 ']' 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1291553 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1291553 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1291553' 00:10:57.622 killing process with pid 1291553 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1291553 00:10:57.622 Received shutdown signal, test time was about 10.000000 seconds 00:10:57.622 00:10:57.622 Latency(us) 00:10:57.622 [2024-11-26T06:20:25.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.622 [2024-11-26T06:20:25.720Z] =================================================================================================================== 00:10:57.622 [2024-11-26T06:20:25.720Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1291553 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.622 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.622 rmmod nvme_tcp 00:10:57.622 rmmod nvme_fabrics 00:10:57.882 rmmod nvme_keyring 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1291470 ']' 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1291470 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1291470 ']' 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1291470 
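The bdevperf summary a few lines up is internally consistent and easy to sanity-check: 4 KiB I/O at 12636.81 IOPS should give 12636.81 * 4096 / 2^20 MiB/s, and by Little's law the in-flight count should match the requested queue depth. A quick check (numbers copied from the report above):

# Sanity-check of the bdevperf summary: throughput and Little's law.
iops=12636.81; io_size=4096; avg_lat_us=80771.65

awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", i*s/1048576 }'
# -> 49.36 MiB/s, matching the reported column

awk -v i="$iops" -v l="$avg_lat_us" 'BEGIN { printf "%.0f in flight\n", i*l/1e6 }'
# -> ~1021 outstanding I/Os, i.e. the -q 1024 queue depth (not the device)
#    is what bounds this run, as a queue-depth test intends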
00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1291470 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1291470' 00:10:57.882 killing process with pid 1291470 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1291470 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1291470 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.882 07:20:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:00.507 00:11:00.507 real 0m22.491s 00:11:00.507 user 0m25.599s 00:11:00.507 sys 0m7.221s 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:00.507 ************************************ 00:11:00.507 END TEST nvmf_queue_depth 00:11:00.507 ************************************ 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.507 
************************************ 00:11:00.507 START TEST nvmf_target_multipath 00:11:00.507 ************************************ 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:00.507 * Looking for test storage... 00:11:00.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.507 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.507 --rc genhtml_branch_coverage=1 00:11:00.507 --rc genhtml_function_coverage=1 00:11:00.507 --rc genhtml_legend=1 00:11:00.507 --rc geninfo_all_blocks=1 00:11:00.507 --rc geninfo_unexecuted_blocks=1 00:11:00.507 00:11:00.508 ' 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.508 --rc genhtml_branch_coverage=1 00:11:00.508 --rc genhtml_function_coverage=1 00:11:00.508 --rc genhtml_legend=1 00:11:00.508 --rc geninfo_all_blocks=1 00:11:00.508 --rc geninfo_unexecuted_blocks=1 00:11:00.508 00:11:00.508 ' 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.508 --rc genhtml_branch_coverage=1 00:11:00.508 --rc genhtml_function_coverage=1 00:11:00.508 --rc genhtml_legend=1 00:11:00.508 --rc geninfo_all_blocks=1 00:11:00.508 --rc geninfo_unexecuted_blocks=1 00:11:00.508 00:11:00.508 ' 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.508 --rc genhtml_branch_coverage=1 00:11:00.508 --rc genhtml_function_coverage=1 00:11:00.508 --rc genhtml_legend=1 00:11:00.508 --rc geninfo_all_blocks=1 00:11:00.508 --rc geninfo_unexecuted_blocks=1 00:11:00.508 00:11:00.508 ' 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.508 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.509 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.509 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.509 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.509 07:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:08.648 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:08.648 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:08.648 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.648 07:20:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:08.648 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:08.648 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:08.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:11:08.649 00:11:08.649 --- 10.0.0.2 ping statistics --- 00:11:08.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.649 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:11:08.649 00:11:08.649 --- 10.0.0.1 ping statistics --- 00:11:08.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.649 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:08.649 only one NIC for nvmf test 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
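
The nvmf_tcp_init trace above builds a self-contained loopback topology out of the two back-to-back e810 ports: cvl_0_0 moves into a private network namespace to act as the target side, cvl_0_1 stays in the root namespace as the initiator, an iptables rule opens TCP port 4420, and a ping in each direction proves the path before any test runs. A minimal stand-alone sketch of that wiring follows; interface names and addresses are taken from the trace, and this is deliberately not the full common.sh logic.

    #!/usr/bin/env bash
    # Minimal sketch of the loopback topology nvmf_tcp_init builds above.
    # Assumes two back-to-back ports enumerated as cvl_0_0 / cvl_0_1.
    set -euo pipefail

    TARGET_IF=cvl_0_0        # moves into a private namespace, target side
    INITIATOR_IF=cvl_0_1     # stays in the root namespace, initiator side
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port; the SPDK_NVMF comment is what later lets
    # the harness strip the rule with iptables-save | grep -v SPDK_NVMF.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment SPDK_NVMF

    # Both directions must answer before the tests proceed.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
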
00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.649 rmmod nvme_tcp 00:11:08.649 rmmod nvme_fabrics 00:11:08.649 rmmod nvme_keyring 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.649 07:20:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.036 07:20:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.036 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.036 00:11:10.036 real 0m9.907s 00:11:10.036 user 0m2.149s 00:11:10.036 sys 0m5.719s 00:11:10.036 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.036 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:10.036 ************************************ 00:11:10.036 END TEST nvmf_target_multipath 00:11:10.036 ************************************ 00:11:10.036 07:20:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:10.036 07:20:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.036 07:20:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.036 07:20:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:10.036 ************************************ 00:11:10.036 START TEST nvmf_zcopy 00:11:10.036 ************************************ 00:11:10.036 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:10.298 * Looking for test storage... 
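
The END TEST / START TEST banners and the real/user/sys summary around each test come from the run_test helper in autotest_common.sh, which is what dispatches zcopy.sh here. Below is a hypothetical, stripped-down re-creation of that wrapper for orientation only; the real helper also validates its argument count (the '[' 3 -le 1 ']' check traced above) and manages xtrace state around the sub-test.

    # Simplified, hypothetical sketch of the run_test wrapper.
    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"           # prints the real/user/sys summary seen above
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }

    run_test nvmf_zcopy ./test/nvmf/target/zcopy.sh --transport=tcp
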
00:11:10.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.298 --rc genhtml_branch_coverage=1 00:11:10.298 --rc genhtml_function_coverage=1 00:11:10.298 --rc genhtml_legend=1 00:11:10.298 --rc geninfo_all_blocks=1 00:11:10.298 --rc geninfo_unexecuted_blocks=1 00:11:10.298 00:11:10.298 ' 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.298 --rc genhtml_branch_coverage=1 00:11:10.298 --rc genhtml_function_coverage=1 00:11:10.298 --rc genhtml_legend=1 00:11:10.298 --rc geninfo_all_blocks=1 00:11:10.298 --rc geninfo_unexecuted_blocks=1 00:11:10.298 00:11:10.298 ' 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.298 --rc genhtml_branch_coverage=1 00:11:10.298 --rc genhtml_function_coverage=1 00:11:10.298 --rc genhtml_legend=1 00:11:10.298 --rc geninfo_all_blocks=1 00:11:10.298 --rc geninfo_unexecuted_blocks=1 00:11:10.298 00:11:10.298 ' 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.298 --rc genhtml_branch_coverage=1 00:11:10.298 --rc genhtml_function_coverage=1 00:11:10.298 --rc genhtml_legend=1 00:11:10.298 --rc geninfo_all_blocks=1 00:11:10.298 --rc geninfo_unexecuted_blocks=1 00:11:10.298 00:11:10.298 ' 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:10.298 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:10.299 07:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:18.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:18.449 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:18.449 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:18.450 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:18.450 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:11:18.450 00:11:18.450 --- 10.0.0.2 ping statistics --- 00:11:18.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.450 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:11:18.450 00:11:18.450 --- 10.0.0.1 ping statistics --- 00:11:18.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.450 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1302278 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1302278 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1302278 ']' 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:18.450 07:20:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.450 [2024-11-26 07:20:45.910099] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
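
At this point nvmfappstart launches nvmf_tgt inside the target namespace and blocks in waitforlisten until the RPC socket accepts commands. A rough equivalent of that launch-and-wait pattern is sketched below; waitforlisten here is a simplified stand-in for the autotest_common.sh helper of the same name, and only the paths and nvmf_tgt flags are taken from the trace.

    # Launch the target inside the namespace, then poll the RPC socket.
    NS=cvl_0_0_ns_spdk
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    waitforlisten() {    # simplified stand-in, not the real helper
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for _ in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1       # app died early
            "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods \
                &>/dev/null && return 0                  # socket answers
            sleep 0.1
        done
        return 1
    }

    waitforlisten "$nvmfpid"
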
00:11:18.451 [2024-11-26 07:20:45.910172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.451 [2024-11-26 07:20:46.009759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.451 [2024-11-26 07:20:46.059899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.451 [2024-11-26 07:20:46.059953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.451 [2024-11-26 07:20:46.059962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.451 [2024-11-26 07:20:46.059969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.451 [2024-11-26 07:20:46.059975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.451 [2024-11-26 07:20:46.060773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.713 [2024-11-26 07:20:46.795104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.713 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.974 [2024-11-26 07:20:46.819415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.974 malloc0 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:18.974 { 00:11:18.974 "params": { 00:11:18.974 "name": "Nvme$subsystem", 00:11:18.974 "trtype": "$TEST_TRANSPORT", 00:11:18.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:18.974 "adrfam": "ipv4", 00:11:18.974 "trsvcid": "$NVMF_PORT", 00:11:18.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:18.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:18.974 "hdgst": ${hdgst:-false}, 00:11:18.974 "ddgst": ${ddgst:-false} 00:11:18.974 }, 00:11:18.974 "method": "bdev_nvme_attach_controller" 00:11:18.974 } 00:11:18.974 EOF 00:11:18.974 )") 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:11:18.974 07:20:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:11:18.974 "params": {
00:11:18.974 "name": "Nvme1",
00:11:18.974 "trtype": "tcp",
00:11:18.974 "traddr": "10.0.0.2",
00:11:18.974 "adrfam": "ipv4",
00:11:18.974 "trsvcid": "4420",
00:11:18.974 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:11:18.974 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:11:18.974 "hdgst": false,
00:11:18.974 "ddgst": false
00:11:18.974 },
00:11:18.974 "method": "bdev_nvme_attach_controller"
00:11:18.974 }'
00:11:18.974 [2024-11-26 07:20:46.928471] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:11:18.974 [2024-11-26 07:20:46.928540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302552 ]
00:11:19.236 [2024-11-26 07:20:47.023018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:19.236 [2024-11-26 07:20:47.075691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:19.236 Running I/O for 10 seconds...
00:11:21.562 6440.00 IOPS, 50.31 MiB/s
[2024-11-26T06:20:50.600Z] 7472.50 IOPS, 58.38 MiB/s
[2024-11-26T06:20:51.541Z] 8196.33 IOPS, 64.03 MiB/s
[2024-11-26T06:20:52.482Z] 8590.75 IOPS, 67.12 MiB/s
[2024-11-26T06:20:53.426Z] 8829.20 IOPS, 68.98 MiB/s
[2024-11-26T06:20:54.366Z] 8984.50 IOPS, 70.19 MiB/s
[2024-11-26T06:20:55.749Z] 9095.14 IOPS, 71.06 MiB/s
[2024-11-26T06:20:56.320Z] 9179.88 IOPS, 71.72 MiB/s
[2024-11-26T06:20:57.706Z] 9245.89 IOPS, 72.23 MiB/s
[2024-11-26T06:20:57.706Z] 9293.80 IOPS, 72.61 MiB/s
00:11:29.608 Latency(us)
00:11:29.608 [2024-11-26T06:20:57.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:29.608 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:11:29.608 Verification LBA range: start 0x0 length 0x1000
00:11:29.608 Nvme1n1 : 10.01 9295.85 72.62 0.00 0.00 13723.44 2402.99 28180.48
00:11:29.608 [2024-11-26T06:20:57.706Z] ===================================================================================================================
00:11:29.608 [2024-11-26T06:20:57.706Z] Total : 9295.85 72.62 0.00 0.00 13723.44 2402.99 28180.48
00:11:29.608 07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1304570
07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:29.608 07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:11:29.608 {
00:11:29.608 "params": {
00:11:29.608 "name": 
"Nvme$subsystem", 00:11:29.608 "trtype": "$TEST_TRANSPORT", 00:11:29.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:29.608 "adrfam": "ipv4", 00:11:29.608 "trsvcid": "$NVMF_PORT", 00:11:29.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:29.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:29.608 "hdgst": ${hdgst:-false}, 00:11:29.608 "ddgst": ${ddgst:-false} 00:11:29.608 }, 00:11:29.608 "method": "bdev_nvme_attach_controller" 00:11:29.608 } 00:11:29.608 EOF 00:11:29.608 )") 00:11:29.608 [2024-11-26 07:20:57.431530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.608 [2024-11-26 07:20:57.431562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.608 07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:29.608 07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:29.608 07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:29.608 07:20:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:29.608 "params": { 00:11:29.608 "name": "Nvme1", 00:11:29.608 "trtype": "tcp", 00:11:29.608 "traddr": "10.0.0.2", 00:11:29.608 "adrfam": "ipv4", 00:11:29.608 "trsvcid": "4420", 00:11:29.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:29.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:29.608 "hdgst": false, 00:11:29.608 "ddgst": false 00:11:29.608 }, 00:11:29.608 "method": "bdev_nvme_attach_controller" 00:11:29.608 }' 00:11:29.608 [2024-11-26 07:20:57.443520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.609 [2024-11-26 07:20:57.443530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.609 [2024-11-26 07:20:57.455548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.609 [2024-11-26 07:20:57.455556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.609 [2024-11-26 07:20:57.467579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.609 [2024-11-26 07:20:57.467586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.609 [2024-11-26 07:20:57.476861] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:11:29.609 [2024-11-26 07:20:57.476911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304570 ]
00:11:29.609 [2024-11-26 07:20:57.479611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.479624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.491643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.491651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.503672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.503679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.515702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.515710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.527733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.527741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.539763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.539771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.551794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.551801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.557813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:29.609 [2024-11-26 07:20:57.563826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.563834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.575854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.575863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.587318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:29.609 [2024-11-26 07:20:57.587885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.587894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.599921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.599931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.611951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.611963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.623978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.623988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.636009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.636019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.648037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.648044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.660083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.660102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.672101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.672111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.684134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.684144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.609 [2024-11-26 07:20:57.696170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.609 [2024-11-26 07:20:57.696186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.870 [2024-11-26 07:20:57.708198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.870 [2024-11-26 07:20:57.708210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.870 [2024-11-26 07:20:57.720241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:29.870 [2024-11-26 07:20:57.720259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:29.870 Running I/O for 5 seconds...
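Each subsystem.c:2123/nvmf_rpc.c:1517 pair in the stream that follows records one failed attempt to re-add NSID 1 while it is still attached, repeated for as long as this second, 5-second bdevperf run (-t 5 -q 128 -w randrw -M 50 -o 8192) is in flight. The MiB/s figures interleaved below follow directly from the IOPS and the 8 KiB I/O size; as a quick check (a sketch, not part of the suite: the iops_to_mibs name is illustrative and bc is assumed to be installed):

# With -o 8192 every I/O moves 8 KiB, so MiB/s = IOPS * 8192 / 2^20.
iops_to_mibs() {
  echo "scale=2; $1 * 8192 / 1048576" | bc
}
iops_to_mibs 9295.85    # -> 72.62, the 10-second run's summary above
iops_to_mibs 19139.00   # -> 149.52, matching the first in-flight sample below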
00:11:29.870 [2024-11-26 07:20:57.732262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.732271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.746501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.746519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.760358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.760375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.773064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.773081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.786431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.786447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.799689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.799705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.812471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.812488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.825164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.825180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.838921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.838938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.851903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.851919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.865287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.865303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.877809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.877825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.890584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.890599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.904050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.904066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.917491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 
[2024-11-26 07:20:57.917507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.930622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.930638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.943929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.943948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.870 [2024-11-26 07:20:57.957344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.870 [2024-11-26 07:20:57.957360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:57.970628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:57.970644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:57.984219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:57.984235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:57.997465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:57.997480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.010886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.010901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.023774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.023789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.036873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.036889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.050346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.050362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.063765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.063781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.077280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.077296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.091033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.091048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.104458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.104473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.117574] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.117590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.131060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.131076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.143979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.131 [2024-11-26 07:20:58.143995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.131 [2024-11-26 07:20:58.156094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.132 [2024-11-26 07:20:58.156110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.132 [2024-11-26 07:20:58.169636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.132 [2024-11-26 07:20:58.169651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.132 [2024-11-26 07:20:58.182305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.132 [2024-11-26 07:20:58.182320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.132 [2024-11-26 07:20:58.195546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.132 [2024-11-26 07:20:58.195561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.132 [2024-11-26 07:20:58.208626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.132 [2024-11-26 07:20:58.208642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.132 [2024-11-26 07:20:58.222413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.132 [2024-11-26 07:20:58.222430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.234879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.234895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.247777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.247793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.260824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.260839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.274010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.274026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.287170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.287186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.299923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.299938] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.312578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.312593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.324955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.324971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.338419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.338435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.351603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.351618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.365182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.365197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.378260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.378276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.391771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.391786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.404857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.404872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.417496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.417511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.393 [2024-11-26 07:20:58.430833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.393 [2024-11-26 07:20:58.430849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.394 [2024-11-26 07:20:58.444012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.394 [2024-11-26 07:20:58.444027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.394 [2024-11-26 07:20:58.457498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.394 [2024-11-26 07:20:58.457513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.394 [2024-11-26 07:20:58.470244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.394 [2024-11-26 07:20:58.470258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.394 [2024-11-26 07:20:58.483430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.394 [2024-11-26 07:20:58.483444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.497061] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.497076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.510249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.510264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.522844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.522859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.535122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.535137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.548088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.548102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.560981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.560996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.574402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.574417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.586836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.586851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.599371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.599385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.611802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.611817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.625210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.625224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.638563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.638577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.652029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.652044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.665283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.665298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.678665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.678680] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.692167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.692182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.705179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.705194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.718424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.718439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 19139.00 IOPS, 149.52 MiB/s [2024-11-26T06:20:58.753Z] [2024-11-26 07:20:58.731896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.731912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.655 [2024-11-26 07:20:58.744711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.655 [2024-11-26 07:20:58.744726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 07:20:58.758003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.916 [2024-11-26 07:20:58.758018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 07:20:58.771174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.916 [2024-11-26 07:20:58.771189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 07:20:58.783808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.916 [2024-11-26 07:20:58.783823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 07:20:58.797296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.916 [2024-11-26 07:20:58.797311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 07:20:58.810426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.916 [2024-11-26 07:20:58.810442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 07:20:58.824036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.916 [2024-11-26 07:20:58.824053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 07:20:58.836653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.916 [2024-11-26 07:20:58.836669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 07:20:58.849951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.916 [2024-11-26 07:20:58.849966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 07:20:58.862898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.916 [2024-11-26 07:20:58.862913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 
07:20:58.875667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.916 [2024-11-26 07:20:58.875683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.916 [2024-11-26 07:20:58.887814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.917 [2024-11-26 07:20:58.887829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.917 [2024-11-26 07:20:58.901392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.917 [2024-11-26 07:20:58.901407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.917 [2024-11-26 07:20:58.914019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.917 [2024-11-26 07:20:58.914034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.917 [2024-11-26 07:20:58.927196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.917 [2024-11-26 07:20:58.927215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.917 [2024-11-26 07:20:58.940894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.917 [2024-11-26 07:20:58.940910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.917 [2024-11-26 07:20:58.953314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.917 [2024-11-26 07:20:58.953330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.917 [2024-11-26 07:20:58.965648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.917 [2024-11-26 07:20:58.965663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.917 [2024-11-26 07:20:58.978782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.917 [2024-11-26 07:20:58.978797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.917 [2024-11-26 07:20:58.990821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.917 [2024-11-26 07:20:58.990836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.917 [2024-11-26 07:20:59.003190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.917 [2024-11-26 07:20:59.003205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.016757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.016772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.029678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.029692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.043246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.043261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.056563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.056579] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.069786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.069801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.083297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.083312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.096279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.096294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.109451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.109466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.122644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.122659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.135970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.135985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.149305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.149320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.161744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.161758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.174859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.174878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.187165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.178 [2024-11-26 07:20:59.187180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.178 [2024-11-26 07:20:59.200613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.179 [2024-11-26 07:20:59.200628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.179 [2024-11-26 07:20:59.213437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.179 [2024-11-26 07:20:59.213452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.179 [2024-11-26 07:20:59.226854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.179 [2024-11-26 07:20:59.226868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.179 [2024-11-26 07:20:59.239582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.179 [2024-11-26 07:20:59.239597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.179 [2024-11-26 07:20:59.252326] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.179 [2024-11-26 07:20:59.252341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.179 [2024-11-26 07:20:59.265612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.179 [2024-11-26 07:20:59.265627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.278759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.278775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.292078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.292092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.305497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.305512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.318750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.318765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.331534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.331548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.344512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.344527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.358003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.358017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.370764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.370779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.384156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.384177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.397455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.397470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.410686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.410701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.424299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.424319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.437397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.437414] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.450543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.450558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.463435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.463451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.476474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.476489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.489332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.489348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.502247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.502262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.514951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.514966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.439 [2024-11-26 07:20:59.528243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.439 [2024-11-26 07:20:59.528258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.700 [2024-11-26 07:20:59.541741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.700 [2024-11-26 07:20:59.541756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.555267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.555282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.568966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.568981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.582037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.582052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.594990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.595005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.608470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.608485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.621652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.621667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.635094] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.635109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.647904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.647919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.661223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.661239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.674145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.674169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.686618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.686634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.699895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.699910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.712712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.712727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.726409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.726425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 19220.50 IOPS, 150.16 MiB/s [2024-11-26T06:20:59.799Z] [2024-11-26 07:20:59.739748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.739764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.752757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.752773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.765597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.765613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.779263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.779279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.701 [2024-11-26 07:20:59.792729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.701 [2024-11-26 07:20:59.792745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.806093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.806109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.819578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:31.962 [2024-11-26 07:20:59.819593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.832449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.832464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.845022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.845037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.858431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.858446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.871396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.871411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.885132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.885147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.898797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.898813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.911631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.911647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.924008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.924024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.937247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.937263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.950585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.950600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.963592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.963607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.977373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.977388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:20:59.989827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:20:59.989843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:21:00.003833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:21:00.003850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:21:00.015328] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:21:00.015344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:21:00.028781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:21:00.028798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:21:00.042406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:21:00.042421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.962 [2024-11-26 07:21:00.054807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.962 [2024-11-26 07:21:00.054823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.068537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.068553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.081429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.081445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.095020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.095036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.108377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.108393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.121686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.121702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.134939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.134955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.148398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.148414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.162051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.162066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.174645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.174660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.187517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.187532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.224 [2024-11-26 07:21:00.201259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.224 [2024-11-26 07:21:00.201274] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:32.224 [2024-11-26 07:21:00.214074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:32.224 [2024-11-26 07:21:00.214089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-record error pair repeats roughly every 13 ms while the test keeps retrying nvmf_subsystem_add_ns with NSID 1 still attached; the repetitions are elided here and only the periodic throughput records and the final summary are kept]
00:11:32.747 19241.67 IOPS, 150.33 MiB/s [2024-11-26T06:21:00.845Z]
00:11:33.793 19253.50 IOPS, 150.42 MiB/s [2024-11-26T06:21:01.891Z]
00:11:34.838 19264.00 IOPS, 150.50 MiB/s [2024-11-26T06:21:02.936Z]
00:11:34.838
00:11:34.838 Latency(us)
00:11:34.838 [2024-11-26T06:21:02.936Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average       min       max
00:11:34.838 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:34.838 Nvme1n1 :                          5.01 19268.74   150.54     0.00   0.00   6636.81   2703.36  16820.91
00:11:34.838 [2024-11-26T06:21:02.936Z] ===================================================================================================================
00:11:34.838 [2024-11-26T06:21:02.937Z] Total :             19268.74   150.54     0.00   0.00   6636.81   2703.36  16820.91
00:11:34.839 [the error pair continues through 07:21:02.849072 and stops once the namespace-add loop is torn down below]
00:11:34.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1304570) - No such process
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1304570
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:34.839 delay0
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:34.839 07:21:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:11:35.100 [2024-11-26 07:21:03.015732] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:11:43.242 Initializing NVMe Controllers
00:11:43.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:43.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:43.242 Initialization complete. Launching workers.
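The rpc_cmd calls traced above swap the subsystem's only namespace from the plain malloc0 bdev to a delay bdev, so that queued I/O becomes slow enough for the abort example to cancel. A minimal standalone sketch of the same sequence, assuming a running SPDK target that already serves nqn.2016-06.io.spdk:cnode1 backed by malloc0 (rpc_cmd is the test wrapper around scripts/rpc.py):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Detach the current namespace (NSID 1, backed by malloc0).
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Wrap malloc0 in a delay bdev; the four values are avg/p99 read and write
  # latencies in microseconds, i.e. roughly 1 s added to every I/O.
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Re-expose the slow bdev under the same NSID so the abort tool has
  # long-queued commands to cancel.
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1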
00:11:43.242 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 294, failed: 13669
00:11:43.242 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13881, failed to submit 82
00:11:43.242 success 13751, unsuccessful 130, failed 0
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:43.242 rmmod nvme_tcp
00:11:43.242 rmmod nvme_fabrics
00:11:43.242 rmmod nvme_keyring
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1302278 ']'
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1302278
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1302278 ']'
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1302278
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1302278
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1302278'
00:11:43.242 killing process with pid 1302278
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1302278
00:11:43.242 07:21:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1302278
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:43.242 07:21:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:44.185 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:44.185
00:11:44.185 real 0m34.037s
00:11:44.185 user 0m44.822s
00:11:44.185 sys 0m11.675s
00:11:44.185 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:44.185 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:44.185 ************************************
00:11:44.185 END TEST nvmf_zcopy
00:11:44.185 ************************************
00:11:44.186 07:21:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:11:44.186 07:21:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:44.186 07:21:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:44.186 07:21:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:44.186 ************************************
00:11:44.186 START TEST nvmf_nmic
00:11:44.186 ************************************
00:11:44.186 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:11:44.447 * Looking for test storage...
00:11:44.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:44.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:44.447 --rc genhtml_branch_coverage=1
00:11:44.447 --rc genhtml_function_coverage=1
00:11:44.447 --rc genhtml_legend=1
00:11:44.447 --rc geninfo_all_blocks=1
00:11:44.447 --rc geninfo_unexecuted_blocks=1
00:11:44.447
00:11:44.447 '
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:44.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:44.447 --rc genhtml_branch_coverage=1
00:11:44.447 --rc genhtml_function_coverage=1
00:11:44.447 --rc genhtml_legend=1
00:11:44.447 --rc geninfo_all_blocks=1
00:11:44.447 --rc geninfo_unexecuted_blocks=1
00:11:44.447
00:11:44.447 '
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:11:44.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:44.447 --rc genhtml_branch_coverage=1
00:11:44.447 --rc genhtml_function_coverage=1
00:11:44.447 --rc genhtml_legend=1
00:11:44.447 --rc geninfo_all_blocks=1
00:11:44.447 --rc geninfo_unexecuted_blocks=1
00:11:44.447
00:11:44.447 '
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:11:44.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:44.447 --rc genhtml_branch_coverage=1
00:11:44.447 --rc genhtml_function_coverage=1
00:11:44.447 --rc genhtml_legend=1
00:11:44.447 --rc geninfo_all_blocks=1
00:11:44.447 --rc geninfo_unexecuted_blocks=1
00:11:44.447
00:11:44.447 '
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
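The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.0: both version strings are split on '.', '-' and ':' and compared component by component until one side wins. A condensed standalone sketch of that walk (simplified from the traced helper: missing components default to 0 and only the '<' and '>' operators are handled):

  cmp_versions_sketch() {
      # Split e.g. "1.15" -> (1 15) and "2" -> (2) on the same separators as the trace.
      local -a ver1 ver2
      local op=$2 v lt=0 gt=0
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      # Walk up to the longer of the two component lists.
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
      done
      case "$op" in
          '<') (( lt == 1 )) ;;
          '>') (( gt == 1 )) ;;
      esac
  }
  cmp_versions_sketch 1.15 '<' 2   # exits 0, as in the trace: 1 < 2 decides at the first component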
00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.447 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:44.448 
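
The "[: : integer expression expected" line above is a real bash diagnostic, not test output: common.sh line 33 runs '[' '' -eq 1 ']' with an empty variable, and test(1) needs integers on both sides of -eq. The test fails with a diagnostic instead of comparing, and execution simply continues; the guarded form below avoids the noise (VAR is an illustrative name, not the variable common.sh actually checks):

    VAR=
    [ "$VAR" -eq 1 ] && echo enabled        # -> "[: : integer expression expected"
    [ "${VAR:-0}" -eq 1 ] && echo enabled   # defaulting to 0 keeps it well-formed
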
07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.448 07:21:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:52.676 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:52.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.676 07:21:19 
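
The arrays being filled above are a whitelist of NVMe-oF-capable NICs keyed by PCI vendor:device ID: Intel E810 variants (0x1592, 0x159b), Intel X722 (0x37d2), and a range of Mellanox ConnectX IDs; the job then keeps only the class it was configured for (e810 here, matching the two 0x159b functions found above). A compact sketch of the same lookup-table idea, assuming bash 4+ associative arrays, with IDs taken from the trace:

    declare -A nic_class=(
        [0x8086:0x1592]=e810 [0x8086:0x159b]=e810   # Intel E810
        [0x8086:0x37d2]=x722                        # Intel X722
        [0x15b3:0x1017]=mlx  [0x15b3:0x101b]=mlx    # Mellanox ConnectX
    )
    echo "${nic_class[0x8086:0x159b]:-unknown}"     # -> e810, as detected above
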
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:52.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:52.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.676 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
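
Each "Found net devices under ..." line above comes from mapping a PCI address to its kernel interface through sysfs: the glob /sys/bus/pci/devices/$pci/net/* lists the netdevs bound to that function, and the ##*/ expansion strips the directory part. The same lookup, standalone, with the address taken from this log:

    pci=0000:4b:00.0
    devs=( "/sys/bus/pci/devices/$pci/net/"* )   # e.g. .../net/cvl_0_0
    devs=( "${devs[@]##*/}" )                    # keep only the interface names
    printf 'Found net devices under %s: %s\n' "$pci" "${devs[*]}"
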
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:11:52.677 00:11:52.677 --- 10.0.0.2 ping statistics --- 00:11:52.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.677 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:11:52.677 00:11:52.677 --- 10.0.0.1 ping statistics --- 00:11:52.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.677 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1311826 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1311826 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1311826 ']' 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.677 07:21:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.677 [2024-11-26 07:21:19.993094] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
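
nvmf_tcp_init above builds the whole test topology on one host: the NIC's first port moves into its own network namespace and becomes the target side (10.0.0.2), the second port stays in the root namespace as the initiator side (10.0.0.1), and an iptables rule admits the NVMe/TCP port. Condensed from the commands in the trace, with the interface and namespace names as in this log:

    ip netns add cvl_0_0_ns_spdk                     # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk \
        ip addr add 10.0.0.2/24 dev cvl_0_0          # target side, inside ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # root ns -> target ns

The two successful pings above confirm the path in both directions before nvmf_tgt is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0xF), and the 0xF core mask is why four reactors come up on cores 0 through 3 below.
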
00:11:52.677 [2024-11-26 07:21:19.993157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.677 [2024-11-26 07:21:20.097496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.677 [2024-11-26 07:21:20.154104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.677 [2024-11-26 07:21:20.154170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.677 [2024-11-26 07:21:20.154181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.677 [2024-11-26 07:21:20.154189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.677 [2024-11-26 07:21:20.154196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.677 [2024-11-26 07:21:20.156484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.677 [2024-11-26 07:21:20.156674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.677 [2024-11-26 07:21:20.156840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.677 [2024-11-26 07:21:20.156840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.939 [2024-11-26 07:21:20.880630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.939 Malloc0 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.939 [2024-11-26 07:21:20.961737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:52.939 test case1: single bdev can't be used in multiple subsystems 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.939 07:21:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.939 [2024-11-26 07:21:20.997583] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:52.939 [2024-11-26 07:21:20.997611] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:52.939 [2024-11-26 07:21:20.997620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:52.939 request: 00:11:52.939 { 00:11:52.939 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:52.939 "namespace": { 00:11:52.939 "bdev_name": "Malloc0", 00:11:52.939 "no_auto_visible": false 
00:11:52.939 }, 00:11:52.939 "method": "nvmf_subsystem_add_ns", 00:11:52.939 "req_id": 1 00:11:52.939 } 00:11:52.939 Got JSON-RPC error response 00:11:52.939 response: 00:11:52.939 { 00:11:52.939 "code": -32602, 00:11:52.939 "message": "Invalid parameters" 00:11:52.939 } 00:11:52.939 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:52.939 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:52.939 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:52.939 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:52.939 Adding namespace failed - expected result. 00:11:52.939 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:52.939 test case2: host connect to nvmf target in multiple paths 00:11:52.939 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:52.939 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.939 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:52.939 [2024-11-26 07:21:21.009770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:52.939 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.939 07:21:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.856 07:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:56.241 07:21:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:56.241 07:21:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:56.241 07:21:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.241 07:21:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:56.241 07:21:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:58.153 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:58.153 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:58.153 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.153 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:58.153 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.153 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:58.153 07:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
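
The two nmic cases above exercise namespace ownership and multipathing. Case1: a bdev added to one subsystem is claimed exclusive_write, so adding Malloc0 to a second subsystem must fail, and the JSON-RPC error above is the expected result. Case2: the same subsystem listens on two ports, giving the host two paths to one namespace. Reduced to the underlying RPC and connect calls (rpc_cmd in the trace is effectively SPDK's rpc.py client; the host NQN/ID are abbreviated to the variables set earlier in this log):

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: already claimed

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
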
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:58.153 [global] 00:11:58.153 thread=1 00:11:58.153 invalidate=1 00:11:58.153 rw=write 00:11:58.153 time_based=1 00:11:58.153 runtime=1 00:11:58.153 ioengine=libaio 00:11:58.153 direct=1 00:11:58.153 bs=4096 00:11:58.153 iodepth=1 00:11:58.153 norandommap=0 00:11:58.153 numjobs=1 00:11:58.153 00:11:58.153 verify_dump=1 00:11:58.153 verify_backlog=512 00:11:58.153 verify_state_save=0 00:11:58.153 do_verify=1 00:11:58.153 verify=crc32c-intel 00:11:58.153 [job0] 00:11:58.153 filename=/dev/nvme0n1 00:11:58.153 Could not set queue depth (nvme0n1) 00:11:58.414 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:58.414 fio-3.35 00:11:58.414 Starting 1 thread 00:11:59.801 00:11:59.801 job0: (groupid=0, jobs=1): err= 0: pid=1313373: Tue Nov 26 07:21:27 2024 00:11:59.801 read: IOPS=501, BW=2006KiB/s (2054kB/s)(2068KiB/1031msec) 00:11:59.801 slat (nsec): min=6780, max=55861, avg=22545.94, stdev=7956.35 00:11:59.801 clat (usec): min=480, max=41676, avg=1153.49, stdev=3952.62 00:11:59.801 lat (usec): min=489, max=41702, avg=1176.04, stdev=3952.97 00:11:59.801 clat percentiles (usec): 00:11:59.801 | 1.00th=[ 510], 5.00th=[ 635], 10.00th=[ 668], 20.00th=[ 693], 00:11:59.801 | 30.00th=[ 750], 40.00th=[ 766], 50.00th=[ 775], 60.00th=[ 791], 00:11:59.801 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 873], 00:11:59.801 | 99.00th=[ 979], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:59.801 | 99.99th=[41681] 00:11:59.801 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:11:59.801 slat (nsec): min=9302, max=52861, avg=21802.76, stdev=11466.98 00:11:59.801 clat (usec): min=107, max=683, avg=381.70, stdev=91.18 00:11:59.801 lat (usec): min=119, max=695, avg=403.50, stdev=98.00 00:11:59.801 clat percentiles (usec): 00:11:59.801 | 1.00th=[ 184], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 318], 00:11:59.801 | 30.00th=[ 343], 40.00th=[ 355], 50.00th=[ 383], 60.00th=[ 420], 00:11:59.801 | 70.00th=[ 437], 80.00th=[ 469], 90.00th=[ 486], 95.00th=[ 515], 00:11:59.801 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 627], 99.95th=[ 685], 00:11:59.801 | 99.99th=[ 685] 00:11:59.801 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:11:59.801 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:11:59.801 lat (usec) : 250=9.02%, 500=52.63%, 750=15.31%, 1000=22.71% 00:11:59.801 lat (msec) : 50=0.32% 00:11:59.801 cpu : usr=1.84%, sys=3.30%, ctx=1541, majf=0, minf=1 00:11:59.801 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:59.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.801 issued rwts: total=517,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.801 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:59.801 00:11:59.801 Run status group 0 (all jobs): 00:11:59.801 READ: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2068KiB (2118kB), run=1031-1031msec 00:11:59.801 WRITE: bw=3973KiB/s (4068kB/s), 3973KiB/s-3973KiB/s (4068kB/s-4068kB/s), io=4096KiB (4194kB), run=1031-1031msec 00:11:59.801 00:11:59.801 Disk stats (read/write): 00:11:59.801 nvme0n1: ios=563/1024, merge=0/0, ticks=470/392, in_queue=862, util=93.09% 00:11:59.801 07:21:27 
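
The fio job above is a single-threaded 4 KiB sequential write at queue depth 1 with CRC32C verification; the READ line in the results is fio reading back and checking what it wrote (do_verify=1), not a separate read workload. The same jobfile as a one-shot command line, a sketch assuming the namespace still enumerates as /dev/nvme0n1:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --invalidate=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=1 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0
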
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:59.801 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.802 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.802 rmmod nvme_tcp 00:12:00.063 rmmod nvme_fabrics 00:12:00.063 rmmod nvme_keyring 00:12:00.063 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.063 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:00.063 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:00.063 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1311826 ']' 00:12:00.063 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1311826 00:12:00.063 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1311826 ']' 00:12:00.063 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1311826 00:12:00.063 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:00.063 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.063 07:21:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1311826 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1311826' 00:12:00.063 killing process with pid 1311826 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1311826 00:12:00.063 07:21:28 
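
Teardown above unwinds the setup in reverse: disconnect the host from both paths, unload the kernel initiator modules, kill the target, then restore the firewall and namespace state. Roughly the following, with the last step an assumption about what _remove_spdk_ns does rather than a command visible in this trace:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # drops both controllers
    modprobe -r nvme-tcp nvme-fabrics                      # the rmmod lines above
    kill "$nvmfpid" && wait "$nvmfpid"                     # nvmf_tgt, pid 1311826 here
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the SPDK rule
    ip netns delete cvl_0_0_ns_spdk                        # assumed cleanup step
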
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1311826 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.063 07:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.782 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.782 00:12:02.782 real 0m17.991s 00:12:02.782 user 0m45.742s 00:12:02.782 sys 0m6.603s 00:12:02.782 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.782 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:02.782 ************************************ 00:12:02.782 END TEST nvmf_nmic 00:12:02.782 ************************************ 00:12:02.782 07:21:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:02.782 07:21:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.782 07:21:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.782 07:21:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:02.782 ************************************ 00:12:02.782 START TEST nvmf_fio_target 00:12:02.782 ************************************ 00:12:02.782 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:02.782 * Looking for test storage... 
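
The "real 0m17.991s / user / sys" block and the START/END TEST banners above come from the run_test wrapper, which times each test script and brackets it with markers so the log can be sliced per test. A sketch of that wrapper pattern (not the autotest_common.sh implementation verbatim):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"             # prints the real/user/sys block on completion
        echo "************ END TEST $name ************"
    }
    run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
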
00:12:02.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:02.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.783 --rc genhtml_branch_coverage=1 00:12:02.783 --rc genhtml_function_coverage=1 00:12:02.783 --rc genhtml_legend=1 00:12:02.783 --rc geninfo_all_blocks=1 00:12:02.783 --rc geninfo_unexecuted_blocks=1 00:12:02.783 00:12:02.783 ' 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:02.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.783 --rc genhtml_branch_coverage=1 00:12:02.783 --rc genhtml_function_coverage=1 00:12:02.783 --rc genhtml_legend=1 00:12:02.783 --rc geninfo_all_blocks=1 00:12:02.783 --rc geninfo_unexecuted_blocks=1 00:12:02.783 00:12:02.783 ' 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:02.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.783 --rc genhtml_branch_coverage=1 00:12:02.783 --rc genhtml_function_coverage=1 00:12:02.783 --rc genhtml_legend=1 00:12:02.783 --rc geninfo_all_blocks=1 00:12:02.783 --rc geninfo_unexecuted_blocks=1 00:12:02.783 00:12:02.783 ' 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:02.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.783 --rc genhtml_branch_coverage=1 00:12:02.783 --rc genhtml_function_coverage=1 00:12:02.783 --rc genhtml_legend=1 00:12:02.783 --rc geninfo_all_blocks=1 00:12:02.783 --rc geninfo_unexecuted_blocks=1 00:12:02.783 00:12:02.783 ' 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.783 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:02.784 07:21:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.784 07:21:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.930 07:21:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:10.930 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:10.930 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.930 07:21:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.930 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:10.931 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:10.931 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.931 07:21:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:10.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:12:10.931 00:12:10.931 --- 10.0.0.2 ping statistics --- 00:12:10.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.931 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:10.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:12:10.931 00:12:10.931 --- 10.0.0.1 ping statistics --- 00:12:10.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.931 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:10.931 07:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1317863 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1317863 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1317863 ']' 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.931 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.931 [2024-11-26 07:21:38.095744] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:12:10.931 [2024-11-26 07:21:38.095809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.931 [2024-11-26 07:21:38.197129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.931 [2024-11-26 07:21:38.251809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.931 [2024-11-26 07:21:38.251865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.931 [2024-11-26 07:21:38.251874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.931 [2024-11-26 07:21:38.251881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.932 [2024-11-26 07:21:38.251887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.932 [2024-11-26 07:21:38.253915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.932 [2024-11-26 07:21:38.254075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.932 [2024-11-26 07:21:38.254237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.932 [2024-11-26 07:21:38.254237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.932 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.932 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:10.932 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.932 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.932 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.932 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.932 07:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:11.193 [2024-11-26 07:21:39.131561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.193 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:11.455 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:11.455 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:11.716 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:11.716 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:11.977 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:11.978 07:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:11.978 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:11.978 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:12.239 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:12.499 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:12.500 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:12.761 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:12.761 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:12.761 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:12.761 07:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:13.022 07:21:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.283 07:21:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:13.283 07:21:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:13.543 07:21:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:13.543 07:21:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.543 07:21:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.804 [2024-11-26 07:21:41.707807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.804 07:21:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:14.064 07:21:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:14.064 07:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.978 07:21:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:15.978 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:15.978 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.978 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:15.978 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:15.978 07:21:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:17.911 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:17.911 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:17.911 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.911 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:17.911 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.911 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:17.911 07:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:17.911 [global] 00:12:17.911 thread=1 00:12:17.911 invalidate=1 00:12:17.911 rw=write 00:12:17.911 time_based=1 00:12:17.911 runtime=1 00:12:17.911 ioengine=libaio 00:12:17.911 direct=1 00:12:17.911 bs=4096 00:12:17.911 iodepth=1 00:12:17.911 norandommap=0 00:12:17.911 numjobs=1 00:12:17.911 00:12:17.911 verify_dump=1 00:12:17.911 verify_backlog=512 00:12:17.911 verify_state_save=0 00:12:17.911 do_verify=1 00:12:17.911 verify=crc32c-intel 00:12:17.911 [job0] 00:12:17.911 filename=/dev/nvme0n1 00:12:17.911 [job1] 00:12:17.911 filename=/dev/nvme0n2 00:12:17.911 [job2] 00:12:17.911 filename=/dev/nvme0n3 00:12:17.911 [job3] 00:12:17.911 filename=/dev/nvme0n4 00:12:17.911 Could not set queue depth (nvme0n1) 00:12:17.911 Could not set queue depth (nvme0n2) 00:12:17.911 Could not set queue depth (nvme0n3) 00:12:17.912 Could not set queue depth (nvme0n4) 00:12:18.172 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:18.172 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:18.172 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:18.172 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:18.172 fio-3.35 00:12:18.172 Starting 4 threads 00:12:19.559 00:12:19.559 job0: (groupid=0, jobs=1): err= 0: pid=1319646: Tue Nov 26 07:21:47 2024 00:12:19.559 read: IOPS=16, BW=65.9KiB/s (67.5kB/s)(68.0KiB/1032msec) 00:12:19.559 slat (nsec): min=27219, max=28522, avg=27730.12, stdev=361.22 00:12:19.559 clat (usec): min=41069, max=42979, avg=42088.53, stdev=468.78 00:12:19.559 lat (usec): min=41097, max=43006, avg=42116.26, stdev=468.72 00:12:19.559 clat percentiles (usec): 00:12:19.559 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 
20.00th=[41681], 00:12:19.559 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:19.559 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:12:19.559 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:12:19.559 | 99.99th=[42730] 00:12:19.559 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:12:19.559 slat (nsec): min=9387, max=57199, avg=32001.56, stdev=10776.74 00:12:19.559 clat (usec): min=132, max=1490, avg=576.76, stdev=148.54 00:12:19.559 lat (usec): min=142, max=1527, avg=608.77, stdev=153.12 00:12:19.559 clat percentiles (usec): 00:12:19.559 | 1.00th=[ 227], 5.00th=[ 318], 10.00th=[ 375], 20.00th=[ 461], 00:12:19.559 | 30.00th=[ 506], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 627], 00:12:19.559 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 791], 00:12:19.559 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 1483], 99.95th=[ 1483], 00:12:19.559 | 99.99th=[ 1483] 00:12:19.559 bw ( KiB/s): min= 4096, max= 4096, per=35.07%, avg=4096.00, stdev= 0.00, samples=1 00:12:19.559 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:19.559 lat (usec) : 250=1.70%, 500=26.65%, 750=59.36%, 1000=8.70% 00:12:19.559 lat (msec) : 2=0.38%, 50=3.21% 00:12:19.559 cpu : usr=1.55%, sys=1.45%, ctx=532, majf=0, minf=1 00:12:19.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.559 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.560 job1: (groupid=0, jobs=1): err= 0: pid=1319647: Tue Nov 26 07:21:47 2024 00:12:19.560 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:19.560 slat (nsec): min=6679, max=64671, avg=26903.76, stdev=7552.24 00:12:19.560 clat (usec): min=403, max=1005, avg=738.29, stdev=115.93 00:12:19.560 lat (usec): min=432, max=1034, avg=765.19, stdev=116.49 00:12:19.560 clat percentiles (usec): 00:12:19.560 | 1.00th=[ 449], 5.00th=[ 545], 10.00th=[ 578], 20.00th=[ 635], 00:12:19.560 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 783], 00:12:19.560 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 881], 95.00th=[ 906], 00:12:19.560 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 1004], 99.95th=[ 1004], 00:12:19.560 | 99.99th=[ 1004] 00:12:19.560 write: IOPS=1012, BW=4052KiB/s (4149kB/s)(4056KiB/1001msec); 0 zone resets 00:12:19.560 slat (usec): min=9, max=42114, avg=75.17, stdev=1321.55 00:12:19.560 clat (usec): min=111, max=986, avg=513.72, stdev=132.30 00:12:19.560 lat (usec): min=125, max=42600, avg=588.89, stdev=1327.60 00:12:19.560 clat percentiles (usec): 00:12:19.560 | 1.00th=[ 202], 5.00th=[ 302], 10.00th=[ 351], 20.00th=[ 392], 00:12:19.560 | 30.00th=[ 449], 40.00th=[ 478], 50.00th=[ 515], 60.00th=[ 553], 00:12:19.560 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 685], 95.00th=[ 725], 00:12:19.560 | 99.00th=[ 807], 99.50th=[ 857], 99.90th=[ 930], 99.95th=[ 988], 00:12:19.560 | 99.99th=[ 988] 00:12:19.560 bw ( KiB/s): min= 4096, max= 4096, per=35.07%, avg=4096.00, stdev= 0.00, samples=1 00:12:19.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:19.560 lat (usec) : 250=1.44%, 500=30.60%, 750=49.48%, 1000=18.41% 00:12:19.560 lat (msec) : 2=0.07% 00:12:19.560 cpu : usr=3.50%, sys=5.80%, ctx=1528, majf=0, minf=1 00:12:19.560 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.560 issued rwts: total=512,1014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.560 job2: (groupid=0, jobs=1): err= 0: pid=1319648: Tue Nov 26 07:21:47 2024 00:12:19.560 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1022msec) 00:12:19.560 slat (nsec): min=27890, max=28996, avg=28332.12, stdev=287.19 00:12:19.560 clat (usec): min=41784, max=42986, avg=42127.24, stdev=372.22 00:12:19.560 lat (usec): min=41812, max=43014, avg=42155.57, stdev=372.40 00:12:19.560 clat percentiles (usec): 00:12:19.560 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:12:19.560 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:12:19.560 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:12:19.560 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:12:19.560 | 99.99th=[42730] 00:12:19.560 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:12:19.560 slat (nsec): min=9806, max=62972, avg=31371.02, stdev=12221.98 00:12:19.560 clat (usec): min=199, max=996, avg=557.20, stdev=156.54 00:12:19.560 lat (usec): min=215, max=1033, avg=588.57, stdev=161.34 00:12:19.560 clat percentiles (usec): 00:12:19.560 | 1.00th=[ 237], 5.00th=[ 289], 10.00th=[ 351], 20.00th=[ 420], 00:12:19.560 | 30.00th=[ 469], 40.00th=[ 519], 50.00th=[ 562], 60.00th=[ 603], 00:12:19.560 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 807], 00:12:19.560 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 996], 99.95th=[ 996], 00:12:19.560 | 99.99th=[ 996] 00:12:19.560 bw ( KiB/s): min= 4096, max= 4096, per=35.07%, avg=4096.00, stdev= 0.00, samples=1 00:12:19.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:19.560 lat (usec) : 250=1.89%, 500=34.78%, 750=50.66%, 1000=9.45% 00:12:19.560 lat (msec) : 50=3.21% 00:12:19.560 cpu : usr=0.69%, sys=2.25%, ctx=530, majf=0, minf=1 00:12:19.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.560 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.560 job3: (groupid=0, jobs=1): err= 0: pid=1319649: Tue Nov 26 07:21:47 2024 00:12:19.560 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:19.560 slat (nsec): min=5298, max=38929, avg=8713.57, stdev=5517.14 00:12:19.560 clat (usec): min=503, max=1244, avg=891.98, stdev=140.33 00:12:19.560 lat (usec): min=510, max=1271, avg=900.70, stdev=143.55 00:12:19.560 clat percentiles (usec): 00:12:19.560 | 1.00th=[ 594], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 758], 00:12:19.560 | 30.00th=[ 840], 40.00th=[ 881], 50.00th=[ 906], 60.00th=[ 930], 00:12:19.560 | 70.00th=[ 947], 80.00th=[ 979], 90.00th=[ 1090], 95.00th=[ 1139], 00:12:19.560 | 99.00th=[ 1205], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:12:19.560 | 99.99th=[ 1237] 00:12:19.560 write: IOPS=974, BW=3896KiB/s (3990kB/s)(3900KiB/1001msec); 0 zone resets 00:12:19.560 slat (nsec): min=5760, max=70507, avg=21112.61, stdev=13910.30 00:12:19.560 clat (usec): min=216, 
max=868, avg=526.27, stdev=112.38 00:12:19.560 lat (usec): min=223, max=904, avg=547.38, stdev=120.88 00:12:19.560 clat percentiles (usec): 00:12:19.560 | 1.00th=[ 245], 5.00th=[ 351], 10.00th=[ 375], 20.00th=[ 437], 00:12:19.560 | 30.00th=[ 469], 40.00th=[ 494], 50.00th=[ 529], 60.00th=[ 553], 00:12:19.560 | 70.00th=[ 586], 80.00th=[ 627], 90.00th=[ 676], 95.00th=[ 717], 00:12:19.560 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 873], 99.95th=[ 873], 00:12:19.560 | 99.99th=[ 873] 00:12:19.560 bw ( KiB/s): min= 4096, max= 4096, per=35.07%, avg=4096.00, stdev= 0.00, samples=1 00:12:19.560 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:19.560 lat (usec) : 250=0.94%, 500=26.97%, 750=42.84%, 1000=23.54% 00:12:19.560 lat (msec) : 2=5.72% 00:12:19.560 cpu : usr=1.50%, sys=3.30%, ctx=1488, majf=0, minf=1 00:12:19.560 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:19.560 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.560 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.560 issued rwts: total=512,975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.560 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:19.560 00:12:19.560 Run status group 0 (all jobs): 00:12:19.560 READ: bw=4101KiB/s (4199kB/s), 65.9KiB/s-2046KiB/s (67.5kB/s-2095kB/s), io=4232KiB (4334kB), run=1001-1032msec 00:12:19.560 WRITE: bw=11.4MiB/s (12.0MB/s), 1984KiB/s-4052KiB/s (2032kB/s-4149kB/s), io=11.8MiB (12.3MB), run=1001-1032msec 00:12:19.560 00:12:19.560 Disk stats (read/write): 00:12:19.560 nvme0n1: ios=34/512, merge=0/0, ticks=1348/227, in_queue=1575, util=83.97% 00:12:19.560 nvme0n2: ios=564/690, merge=0/0, ticks=532/304, in_queue=836, util=88.46% 00:12:19.560 nvme0n3: ios=66/512, merge=0/0, ticks=617/240, in_queue=857, util=94.81% 00:12:19.560 nvme0n4: ios=534/705, merge=0/0, ticks=1324/326, in_queue=1650, util=94.32% 00:12:19.560 07:21:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:19.560 [global] 00:12:19.560 thread=1 00:12:19.560 invalidate=1 00:12:19.560 rw=randwrite 00:12:19.560 time_based=1 00:12:19.560 runtime=1 00:12:19.560 ioengine=libaio 00:12:19.560 direct=1 00:12:19.560 bs=4096 00:12:19.560 iodepth=1 00:12:19.560 norandommap=0 00:12:19.560 numjobs=1 00:12:19.560 00:12:19.560 verify_dump=1 00:12:19.560 verify_backlog=512 00:12:19.560 verify_state_save=0 00:12:19.560 do_verify=1 00:12:19.560 verify=crc32c-intel 00:12:19.560 [job0] 00:12:19.560 filename=/dev/nvme0n1 00:12:19.560 [job1] 00:12:19.560 filename=/dev/nvme0n2 00:12:19.560 [job2] 00:12:19.560 filename=/dev/nvme0n3 00:12:19.560 [job3] 00:12:19.560 filename=/dev/nvme0n4 00:12:19.560 Could not set queue depth (nvme0n1) 00:12:19.560 Could not set queue depth (nvme0n2) 00:12:19.560 Could not set queue depth (nvme0n3) 00:12:19.560 Could not set queue depth (nvme0n4) 00:12:19.834 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.834 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.834 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.834 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:19.834 fio-3.35 00:12:19.834 Starting 4 threads 
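For reference, the fio-wrapper invocation above (like the earlier -t write pass) simply renders a plain fio job file of the form just listed and runs it against the connected namespaces. A minimal standalone sketch of the same randwrite verification job is shown below; the /dev/nvme0n1 device path and all option values are copied from the job file in the log, while the single-command form itself is an assumption about equivalent usage, not the wrapper's actual code path:

  # hedged sketch: one-device equivalent of the job file above, not the wrapper itself
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=randwrite --bs=4096 --iodepth=1 --numjobs=1 \
      --ioengine=libaio --direct=1 --invalidate=1 --norandommap=0 \
      --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0

Because verify=crc32c-intel re-reads and checksums the blocks that were written, the nominally write-only jobs in the results that follow also report read IOPS and bandwidth.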
00:12:21.221 00:12:21.221 job0: (groupid=0, jobs=1): err= 0: pid=1320176: Tue Nov 26 07:21:49 2024 00:12:21.221 read: IOPS=100, BW=402KiB/s (412kB/s)(416KiB/1035msec) 00:12:21.221 slat (nsec): min=6595, max=45755, avg=25219.93, stdev=6690.93 00:12:21.221 clat (usec): min=343, max=43076, avg=7793.17, stdev=15590.59 00:12:21.221 lat (usec): min=350, max=43107, avg=7818.39, stdev=15591.46 00:12:21.221 clat percentiles (usec): 00:12:21.221 | 1.00th=[ 388], 5.00th=[ 529], 10.00th=[ 562], 20.00th=[ 619], 00:12:21.221 | 30.00th=[ 660], 40.00th=[ 693], 50.00th=[ 725], 60.00th=[ 766], 00:12:21.221 | 70.00th=[ 816], 80.00th=[ 922], 90.00th=[41681], 95.00th=[42206], 00:12:21.221 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:12:21.221 | 99.99th=[43254] 00:12:21.221 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:12:21.221 slat (nsec): min=8803, max=68868, avg=28452.86, stdev=9719.10 00:12:21.221 clat (usec): min=120, max=844, avg=394.67, stdev=125.17 00:12:21.221 lat (usec): min=130, max=896, avg=423.12, stdev=127.73 00:12:21.221 clat percentiles (usec): 00:12:21.221 | 1.00th=[ 137], 5.00th=[ 217], 10.00th=[ 260], 20.00th=[ 285], 00:12:21.221 | 30.00th=[ 302], 40.00th=[ 338], 50.00th=[ 392], 60.00th=[ 416], 00:12:21.221 | 70.00th=[ 461], 80.00th=[ 506], 90.00th=[ 570], 95.00th=[ 611], 00:12:21.221 | 99.00th=[ 701], 99.50th=[ 775], 99.90th=[ 848], 99.95th=[ 848], 00:12:21.221 | 99.99th=[ 848] 00:12:21.221 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:12:21.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:21.221 lat (usec) : 250=6.82%, 500=59.90%, 750=25.32%, 1000=5.03% 00:12:21.221 lat (msec) : 50=2.92% 00:12:21.221 cpu : usr=1.55%, sys=1.84%, ctx=617, majf=0, minf=1 00:12:21.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.221 issued rwts: total=104,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.221 job1: (groupid=0, jobs=1): err= 0: pid=1320177: Tue Nov 26 07:21:49 2024 00:12:21.221 read: IOPS=143, BW=575KiB/s (589kB/s)(576KiB/1001msec) 00:12:21.221 slat (nsec): min=5571, max=59927, avg=17427.12, stdev=9599.03 00:12:21.221 clat (usec): min=666, max=42492, avg=5267.73, stdev=12522.41 00:12:21.221 lat (usec): min=674, max=42498, avg=5285.16, stdev=12522.63 00:12:21.221 clat percentiles (usec): 00:12:21.221 | 1.00th=[ 783], 5.00th=[ 840], 10.00th=[ 857], 20.00th=[ 898], 00:12:21.221 | 30.00th=[ 938], 40.00th=[ 1012], 50.00th=[ 1057], 60.00th=[ 1090], 00:12:21.221 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[41157], 95.00th=[41681], 00:12:21.221 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:12:21.221 | 99.99th=[42730] 00:12:21.221 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:21.221 slat (nsec): min=5253, max=29839, avg=6970.63, stdev=1532.69 00:12:21.221 clat (usec): min=117, max=726, avg=457.93, stdev=114.57 00:12:21.221 lat (usec): min=123, max=733, avg=464.91, stdev=114.74 00:12:21.221 clat percentiles (usec): 00:12:21.221 | 1.00th=[ 157], 5.00th=[ 235], 10.00th=[ 302], 20.00th=[ 375], 00:12:21.221 | 30.00th=[ 412], 40.00th=[ 441], 50.00th=[ 465], 60.00th=[ 490], 00:12:21.221 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 635], 
00:12:21.221 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[ 725], 99.95th=[ 725], 00:12:21.221 | 99.99th=[ 725] 00:12:21.221 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:12:21.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:21.221 lat (usec) : 250=4.12%, 500=44.97%, 750=29.12%, 1000=8.54% 00:12:21.221 lat (msec) : 2=10.98%, 50=2.29% 00:12:21.221 cpu : usr=0.20%, sys=1.00%, ctx=656, majf=0, minf=1 00:12:21.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.221 issued rwts: total=144,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.221 job2: (groupid=0, jobs=1): err= 0: pid=1320178: Tue Nov 26 07:21:49 2024 00:12:21.221 read: IOPS=17, BW=70.0KiB/s (71.7kB/s)(72.0KiB/1028msec) 00:12:21.221 slat (nsec): min=25388, max=25894, avg=25592.72, stdev=164.06 00:12:21.221 clat (usec): min=848, max=42002, avg=39502.41, stdev=9654.04 00:12:21.221 lat (usec): min=874, max=42028, avg=39528.00, stdev=9654.00 00:12:21.222 clat percentiles (usec): 00:12:21.222 | 1.00th=[ 848], 5.00th=[ 848], 10.00th=[41157], 20.00th=[41157], 00:12:21.222 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:21.222 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:21.222 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:21.222 | 99.99th=[42206] 00:12:21.222 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:12:21.222 slat (nsec): min=9491, max=64820, avg=28557.26, stdev=9011.01 00:12:21.222 clat (usec): min=275, max=858, avg=582.52, stdev=111.98 00:12:21.222 lat (usec): min=302, max=893, avg=611.07, stdev=114.73 00:12:21.222 clat percentiles (usec): 00:12:21.222 | 1.00th=[ 343], 5.00th=[ 379], 10.00th=[ 429], 20.00th=[ 478], 00:12:21.222 | 30.00th=[ 523], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:12:21.222 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 717], 95.00th=[ 750], 00:12:21.222 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 857], 99.95th=[ 857], 00:12:21.222 | 99.99th=[ 857] 00:12:21.222 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:12:21.222 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:21.222 lat (usec) : 500=24.53%, 750=67.17%, 1000=5.09% 00:12:21.222 lat (msec) : 50=3.21% 00:12:21.222 cpu : usr=0.49%, sys=1.75%, ctx=530, majf=0, minf=1 00:12:21.222 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.222 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.222 job3: (groupid=0, jobs=1): err= 0: pid=1320179: Tue Nov 26 07:21:49 2024 00:12:21.222 read: IOPS=124, BW=500KiB/s (511kB/s)(500KiB/1001msec) 00:12:21.222 slat (nsec): min=6946, max=59801, avg=26040.62, stdev=6247.70 00:12:21.222 clat (usec): min=600, max=42407, avg=5956.72, stdev=13348.97 00:12:21.222 lat (usec): min=608, max=42454, avg=5982.76, stdev=13349.44 00:12:21.222 clat percentiles (usec): 00:12:21.222 | 1.00th=[ 652], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 996], 
00:12:21.222 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:12:21.222 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[41681], 95.00th=[42206], 00:12:21.222 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:21.222 | 99.99th=[42206] 00:12:21.222 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:21.222 slat (nsec): min=9805, max=50986, avg=29661.87, stdev=8170.98 00:12:21.222 clat (usec): min=125, max=800, avg=454.63, stdev=133.17 00:12:21.222 lat (usec): min=136, max=830, avg=484.30, stdev=135.56 00:12:21.222 clat percentiles (usec): 00:12:21.222 | 1.00th=[ 155], 5.00th=[ 260], 10.00th=[ 285], 20.00th=[ 330], 00:12:21.222 | 30.00th=[ 383], 40.00th=[ 416], 50.00th=[ 449], 60.00th=[ 490], 00:12:21.222 | 70.00th=[ 529], 80.00th=[ 570], 90.00th=[ 635], 95.00th=[ 676], 00:12:21.222 | 99.00th=[ 750], 99.50th=[ 799], 99.90th=[ 799], 99.95th=[ 799], 00:12:21.222 | 99.99th=[ 799] 00:12:21.222 bw ( KiB/s): min= 4096, max= 4096, per=51.75%, avg=4096.00, stdev= 0.00, samples=1 00:12:21.222 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:21.222 lat (usec) : 250=3.30%, 500=47.41%, 750=29.67%, 1000=3.92% 00:12:21.222 lat (msec) : 2=13.34%, 50=2.35% 00:12:21.222 cpu : usr=0.80%, sys=1.90%, ctx=637, majf=0, minf=1 00:12:21.222 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.222 issued rwts: total=125,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:21.222 00:12:21.222 Run status group 0 (all jobs): 00:12:21.222 READ: bw=1511KiB/s (1547kB/s), 70.0KiB/s-575KiB/s (71.7kB/s-589kB/s), io=1564KiB (1602kB), run=1001-1035msec 00:12:21.222 WRITE: bw=7915KiB/s (8105kB/s), 1979KiB/s-2046KiB/s (2026kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1035msec 00:12:21.222 00:12:21.222 Disk stats (read/write): 00:12:21.222 nvme0n1: ios=153/512, merge=0/0, ticks=810/157, in_queue=967, util=85.97% 00:12:21.222 nvme0n2: ios=154/512, merge=0/0, ticks=630/223, in_queue=853, util=89.22% 00:12:21.222 nvme0n3: ios=12/512, merge=0/0, ticks=460/287, in_queue=747, util=86.71% 00:12:21.222 nvme0n4: ios=110/512, merge=0/0, ticks=627/211, in_queue=838, util=94.67% 00:12:21.222 07:21:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:21.222 [global] 00:12:21.222 thread=1 00:12:21.222 invalidate=1 00:12:21.222 rw=write 00:12:21.222 time_based=1 00:12:21.222 runtime=1 00:12:21.222 ioengine=libaio 00:12:21.222 direct=1 00:12:21.222 bs=4096 00:12:21.222 iodepth=128 00:12:21.222 norandommap=0 00:12:21.222 numjobs=1 00:12:21.222 00:12:21.222 verify_dump=1 00:12:21.222 verify_backlog=512 00:12:21.222 verify_state_save=0 00:12:21.222 do_verify=1 00:12:21.222 verify=crc32c-intel 00:12:21.222 [job0] 00:12:21.222 filename=/dev/nvme0n1 00:12:21.222 [job1] 00:12:21.222 filename=/dev/nvme0n2 00:12:21.222 [job2] 00:12:21.222 filename=/dev/nvme0n3 00:12:21.222 [job3] 00:12:21.222 filename=/dev/nvme0n4 00:12:21.222 Could not set queue depth (nvme0n1) 00:12:21.222 Could not set queue depth (nvme0n2) 00:12:21.222 Could not set queue depth (nvme0n3) 00:12:21.222 Could not set queue depth (nvme0n4) 00:12:21.482 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:21.482 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:21.482 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:21.482 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:21.482 fio-3.35 00:12:21.482 Starting 4 threads 00:12:22.870 00:12:22.870 job0: (groupid=0, jobs=1): err= 0: pid=1320702: Tue Nov 26 07:21:50 2024 00:12:22.870 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:12:22.870 slat (nsec): min=961, max=11423k, avg=93079.02, stdev=682719.44 00:12:22.870 clat (usec): min=3362, max=63094, avg=10624.82, stdev=5429.43 00:12:22.870 lat (usec): min=3370, max=63102, avg=10717.90, stdev=5512.46 00:12:22.870 clat percentiles (usec): 00:12:22.870 | 1.00th=[ 5014], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 8094], 00:12:22.870 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9896], 00:12:22.870 | 70.00th=[10683], 80.00th=[12256], 90.00th=[15008], 95.00th=[18482], 00:12:22.870 | 99.00th=[39060], 99.50th=[50070], 99.90th=[63177], 99.95th=[63177], 00:12:22.871 | 99.99th=[63177] 00:12:22.871 write: IOPS=4420, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1006msec); 0 zone resets 00:12:22.871 slat (nsec): min=1731, max=10255k, avg=134165.06, stdev=737134.55 00:12:22.871 clat (usec): min=1141, max=93975, avg=18902.38, stdev=19818.60 00:12:22.871 lat (usec): min=1243, max=93990, avg=19036.55, stdev=19946.67 00:12:22.871 clat percentiles (usec): 00:12:22.871 | 1.00th=[ 3294], 5.00th=[ 4490], 10.00th=[ 5276], 20.00th=[ 6980], 00:12:22.871 | 30.00th=[ 7898], 40.00th=[ 9765], 50.00th=[12780], 60.00th=[13435], 00:12:22.871 | 70.00th=[15533], 80.00th=[22938], 90.00th=[47449], 95.00th=[73925], 00:12:22.871 | 99.00th=[87557], 99.50th=[88605], 99.90th=[93848], 99.95th=[93848], 00:12:22.871 | 99.99th=[93848] 00:12:22.871 bw ( KiB/s): min=14072, max=20480, per=17.18%, avg=17276.00, stdev=4531.14, samples=2 00:12:22.871 iops : min= 3518, max= 5120, avg=4319.00, stdev=1132.79, samples=2 00:12:22.871 lat (msec) : 2=0.11%, 4=1.29%, 10=49.50%, 20=36.30%, 50=7.78% 00:12:22.871 lat (msec) : 100=5.02% 00:12:22.871 cpu : usr=2.49%, sys=5.27%, ctx=446, majf=0, minf=1 00:12:22.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:22.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:22.871 issued rwts: total=4096,4447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:22.871 job1: (groupid=0, jobs=1): err= 0: pid=1320706: Tue Nov 26 07:21:50 2024 00:12:22.871 read: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec) 00:12:22.871 slat (nsec): min=980, max=10680k, avg=62965.71, stdev=463198.05 00:12:22.871 clat (usec): min=2043, max=25609, avg=8362.46, stdev=2541.83 00:12:22.871 lat (usec): min=2297, max=25617, avg=8425.43, stdev=2572.27 00:12:22.871 clat percentiles (usec): 00:12:22.871 | 1.00th=[ 4490], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 6783], 00:12:22.871 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7635], 60.00th=[ 8094], 00:12:22.871 | 70.00th=[ 8356], 80.00th=[10028], 90.00th=[11469], 95.00th=[13042], 00:12:22.871 | 99.00th=[18220], 99.50th=[21627], 99.90th=[24773], 99.95th=[24773], 00:12:22.871 | 99.99th=[25560] 00:12:22.871 write: 
IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:12:22.871 slat (nsec): min=1699, max=13291k, avg=68532.99, stdev=551546.21 00:12:22.871 clat (usec): min=518, max=56001, avg=9440.33, stdev=7153.42 00:12:22.871 lat (usec): min=866, max=56003, avg=9508.87, stdev=7209.66 00:12:22.871 clat percentiles (usec): 00:12:22.871 | 1.00th=[ 3097], 5.00th=[ 4359], 10.00th=[ 5145], 20.00th=[ 5800], 00:12:22.871 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7177], 60.00th=[ 7635], 00:12:22.871 | 70.00th=[ 8455], 80.00th=[10421], 90.00th=[13960], 95.00th=[29230], 00:12:22.871 | 99.00th=[40633], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:12:22.871 | 99.99th=[55837] 00:12:22.871 bw ( KiB/s): min=24560, max=32784, per=28.51%, avg=28672.00, stdev=5815.25, samples=2 00:12:22.871 iops : min= 6140, max= 8196, avg=7168.00, stdev=1453.81, samples=2 00:12:22.871 lat (usec) : 750=0.01%, 1000=0.06% 00:12:22.871 lat (msec) : 2=0.01%, 4=1.82%, 10=77.84%, 20=16.63%, 50=3.62% 00:12:22.871 lat (msec) : 100=0.01% 00:12:22.871 cpu : usr=5.87%, sys=7.06%, ctx=491, majf=0, minf=3 00:12:22.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:22.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:22.871 issued rwts: total=7161,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:22.871 job2: (groupid=0, jobs=1): err= 0: pid=1320707: Tue Nov 26 07:21:50 2024 00:12:22.871 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:12:22.871 slat (nsec): min=947, max=14189k, avg=79369.42, stdev=568141.96 00:12:22.871 clat (usec): min=5167, max=34308, avg=10055.37, stdev=3448.97 00:12:22.871 lat (usec): min=5524, max=34323, avg=10134.74, stdev=3492.15 00:12:22.871 clat percentiles (usec): 00:12:22.871 | 1.00th=[ 5997], 5.00th=[ 6849], 10.00th=[ 7898], 20.00th=[ 8455], 00:12:22.871 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9503], 00:12:22.871 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[13435], 95.00th=[17695], 00:12:22.871 | 99.00th=[24773], 99.50th=[29754], 99.90th=[30278], 99.95th=[30278], 00:12:22.871 | 99.99th=[34341] 00:12:22.871 write: IOPS=6942, BW=27.1MiB/s (28.4MB/s)(27.2MiB/1003msec); 0 zone resets 00:12:22.871 slat (nsec): min=1655, max=6515.5k, avg=62607.45, stdev=298592.06 00:12:22.871 clat (usec): min=678, max=18914, avg=8653.08, stdev=1652.74 00:12:22.871 lat (usec): min=1327, max=18921, avg=8715.68, stdev=1662.82 00:12:22.871 clat percentiles (usec): 00:12:22.871 | 1.00th=[ 4948], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 7898], 00:12:22.871 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8586], 00:12:22.871 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[11338], 00:12:22.871 | 99.00th=[15008], 99.50th=[16581], 99.90th=[18220], 99.95th=[19006], 00:12:22.871 | 99.99th=[19006] 00:12:22.871 bw ( KiB/s): min=26744, max=27936, per=27.18%, avg=27340.00, stdev=842.87, samples=2 00:12:22.871 iops : min= 6686, max= 6984, avg=6835.00, stdev=210.72, samples=2 00:12:22.871 lat (usec) : 750=0.01% 00:12:22.871 lat (msec) : 2=0.01%, 10=80.95%, 20=17.63%, 50=1.40% 00:12:22.871 cpu : usr=3.79%, sys=6.69%, ctx=799, majf=0, minf=1 00:12:22.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:22.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.871 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:22.871 issued rwts: total=6656,6963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:22.871 job3: (groupid=0, jobs=1): err= 0: pid=1320708: Tue Nov 26 07:21:50 2024 00:12:22.871 read: IOPS=6468, BW=25.3MiB/s (26.5MB/s)(25.5MiB/1008msec) 00:12:22.871 slat (nsec): min=970, max=16318k, avg=76510.01, stdev=599744.86 00:12:22.871 clat (usec): min=977, max=42715, avg=10397.53, stdev=4224.49 00:12:22.871 lat (usec): min=3048, max=42717, avg=10474.04, stdev=4259.67 00:12:22.871 clat percentiles (usec): 00:12:22.871 | 1.00th=[ 5014], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 8356], 00:12:22.871 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9765], 00:12:22.871 | 70.00th=[10421], 80.00th=[12125], 90.00th=[14091], 95.00th=[16712], 00:12:22.871 | 99.00th=[28443], 99.50th=[32900], 99.90th=[41157], 99.95th=[42730], 00:12:22.871 | 99.99th=[42730] 00:12:22.871 write: IOPS=6713, BW=26.2MiB/s (27.5MB/s)(26.4MiB/1008msec); 0 zone resets 00:12:22.871 slat (nsec): min=1747, max=12545k, avg=55537.02, stdev=421998.42 00:12:22.871 clat (usec): min=976, max=45986, avg=8877.27, stdev=5520.46 00:12:22.871 lat (usec): min=985, max=45996, avg=8932.81, stdev=5543.29 00:12:22.871 clat percentiles (usec): 00:12:22.871 | 1.00th=[ 1663], 5.00th=[ 3130], 10.00th=[ 4490], 20.00th=[ 5342], 00:12:22.871 | 30.00th=[ 6587], 40.00th=[ 7504], 50.00th=[ 8160], 60.00th=[ 8586], 00:12:22.871 | 70.00th=[ 9110], 80.00th=[10159], 90.00th=[14484], 95.00th=[15926], 00:12:22.871 | 99.00th=[39584], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:12:22.871 | 99.99th=[45876] 00:12:22.871 bw ( KiB/s): min=26248, max=27888, per=26.91%, avg=27068.00, stdev=1159.66, samples=2 00:12:22.871 iops : min= 6562, max= 6972, avg=6767.00, stdev=289.91, samples=2 00:12:22.871 lat (usec) : 1000=0.02% 00:12:22.871 lat (msec) : 2=0.90%, 4=3.24%, 10=67.34%, 20=25.24%, 50=3.25% 00:12:22.871 cpu : usr=5.26%, sys=7.94%, ctx=494, majf=0, minf=1 00:12:22.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:22.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:22.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:22.871 issued rwts: total=6520,6767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:22.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:22.871 00:12:22.871 Run status group 0 (all jobs): 00:12:22.871 READ: bw=94.7MiB/s (99.3MB/s), 15.9MiB/s-27.8MiB/s (16.7MB/s-29.2MB/s), io=95.4MiB (100MB), run=1003-1008msec 00:12:22.871 WRITE: bw=98.2MiB/s (103MB/s), 17.3MiB/s-27.8MiB/s (18.1MB/s-29.2MB/s), io=99.0MiB (104MB), run=1003-1008msec 00:12:22.871 00:12:22.871 Disk stats (read/write): 00:12:22.871 nvme0n1: ios=3606/3887, merge=0/0, ticks=33264/62655, in_queue=95919, util=84.17% 00:12:22.871 nvme0n2: ios=5654/5727, merge=0/0, ticks=42767/50560, in_queue=93327, util=88.07% 00:12:22.871 nvme0n3: ios=5646/5632, merge=0/0, ticks=30734/23450, in_queue=54184, util=92.31% 00:12:22.871 nvme0n4: ios=5243/5632, merge=0/0, ticks=51213/47775, in_queue=98988, util=94.14% 00:12:22.871 07:21:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:22.871 [global] 00:12:22.871 thread=1 00:12:22.871 invalidate=1 00:12:22.871 rw=randwrite 00:12:22.871 time_based=1 00:12:22.871 runtime=1 00:12:22.871 
ioengine=libaio 00:12:22.871 direct=1 00:12:22.871 bs=4096 00:12:22.871 iodepth=128 00:12:22.871 norandommap=0 00:12:22.871 numjobs=1 00:12:22.871 00:12:22.871 verify_dump=1 00:12:22.871 verify_backlog=512 00:12:22.871 verify_state_save=0 00:12:22.871 do_verify=1 00:12:22.871 verify=crc32c-intel 00:12:22.871 [job0] 00:12:22.871 filename=/dev/nvme0n1 00:12:22.871 [job1] 00:12:22.871 filename=/dev/nvme0n2 00:12:22.871 [job2] 00:12:22.871 filename=/dev/nvme0n3 00:12:22.871 [job3] 00:12:22.871 filename=/dev/nvme0n4 00:12:22.871 Could not set queue depth (nvme0n1) 00:12:22.871 Could not set queue depth (nvme0n2) 00:12:22.871 Could not set queue depth (nvme0n3) 00:12:22.871 Could not set queue depth (nvme0n4) 00:12:23.132 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.132 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.132 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.132 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:23.132 fio-3.35 00:12:23.132 Starting 4 threads 00:12:24.518 00:12:24.518 job0: (groupid=0, jobs=1): err= 0: pid=1321229: Tue Nov 26 07:21:52 2024 00:12:24.518 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:12:24.518 slat (nsec): min=914, max=23159k, avg=97352.27, stdev=643205.84 00:12:24.518 clat (usec): min=3610, max=55783, avg=12497.75, stdev=8278.76 00:12:24.518 lat (usec): min=3617, max=55790, avg=12595.10, stdev=8316.80 00:12:24.518 clat percentiles (usec): 00:12:24.518 | 1.00th=[ 4359], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7570], 00:12:24.518 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[12387], 00:12:24.518 | 70.00th=[14091], 80.00th=[15270], 90.00th=[20579], 95.00th=[24249], 00:12:24.518 | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:12:24.518 | 99.99th=[55837] 00:12:24.518 write: IOPS=5645, BW=22.1MiB/s (23.1MB/s)(22.1MiB/1002msec); 0 zone resets 00:12:24.518 slat (nsec): min=1500, max=11324k, avg=75094.75, stdev=466182.08 00:12:24.518 clat (usec): min=1275, max=36869, avg=9979.40, stdev=4637.94 00:12:24.518 lat (usec): min=2072, max=36879, avg=10054.49, stdev=4653.79 00:12:24.518 clat percentiles (usec): 00:12:24.518 | 1.00th=[ 3916], 5.00th=[ 5800], 10.00th=[ 6259], 20.00th=[ 6849], 00:12:24.518 | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9896], 00:12:24.518 | 70.00th=[10552], 80.00th=[11469], 90.00th=[13566], 95.00th=[20317], 00:12:24.518 | 99.00th=[32113], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:12:24.518 | 99.99th=[36963] 00:12:24.518 bw ( KiB/s): min=22304, max=22752, per=23.92%, avg=22528.00, stdev=316.78, samples=2 00:12:24.518 iops : min= 5576, max= 5688, avg=5632.00, stdev=79.20, samples=2 00:12:24.518 lat (msec) : 2=0.01%, 4=0.92%, 10=55.88%, 20=35.31%, 50=7.06% 00:12:24.518 lat (msec) : 100=0.82% 00:12:24.518 cpu : usr=2.60%, sys=5.49%, ctx=531, majf=0, minf=1 00:12:24.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:24.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:24.518 issued rwts: total=5632,5657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:24.518 job1: 
(groupid=0, jobs=1): err= 0: pid=1321230: Tue Nov 26 07:21:52 2024 00:12:24.518 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:12:24.518 slat (nsec): min=904, max=8092.6k, avg=74292.50, stdev=487474.32 00:12:24.518 clat (usec): min=1887, max=28561, avg=9874.93, stdev=3971.62 00:12:24.518 lat (usec): min=1894, max=28567, avg=9949.22, stdev=4000.59 00:12:24.518 clat percentiles (usec): 00:12:24.518 | 1.00th=[ 3326], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 7046], 00:12:24.518 | 30.00th=[ 7439], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9765], 00:12:24.518 | 70.00th=[10945], 80.00th=[13304], 90.00th=[15664], 95.00th=[16909], 00:12:24.518 | 99.00th=[21365], 99.50th=[23200], 99.90th=[28443], 99.95th=[28443], 00:12:24.518 | 99.99th=[28443] 00:12:24.518 write: IOPS=6892, BW=26.9MiB/s (28.2MB/s)(27.1MiB/1006msec); 0 zone resets 00:12:24.518 slat (nsec): min=1489, max=6323.6k, avg=63259.50, stdev=418114.95 00:12:24.518 clat (usec): min=1208, max=26102, avg=8888.76, stdev=3560.03 00:12:24.518 lat (usec): min=1236, max=26104, avg=8952.02, stdev=3584.40 00:12:24.518 clat percentiles (usec): 00:12:24.518 | 1.00th=[ 1991], 5.00th=[ 3818], 10.00th=[ 4752], 20.00th=[ 5932], 00:12:24.518 | 30.00th=[ 6849], 40.00th=[ 7570], 50.00th=[ 8291], 60.00th=[ 9634], 00:12:24.518 | 70.00th=[10683], 80.00th=[11731], 90.00th=[13042], 95.00th=[15270], 00:12:24.518 | 99.00th=[19792], 99.50th=[20841], 99.90th=[23462], 99.95th=[23462], 00:12:24.518 | 99.99th=[26084] 00:12:24.518 bw ( KiB/s): min=25784, max=28672, per=28.91%, avg=27228.00, stdev=2042.12, samples=2 00:12:24.518 iops : min= 6446, max= 7168, avg=6807.00, stdev=510.53, samples=2 00:12:24.518 lat (msec) : 2=0.63%, 4=3.01%, 10=60.63%, 20=34.19%, 50=1.53% 00:12:24.518 cpu : usr=4.58%, sys=6.87%, ctx=470, majf=0, minf=1 00:12:24.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:24.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:24.518 issued rwts: total=6656,6934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:24.518 job2: (groupid=0, jobs=1): err= 0: pid=1321231: Tue Nov 26 07:21:52 2024 00:12:24.518 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:12:24.518 slat (nsec): min=948, max=8098.8k, avg=94713.77, stdev=528850.78 00:12:24.518 clat (usec): min=5838, max=48373, avg=12153.09, stdev=4016.77 00:12:24.518 lat (usec): min=5840, max=48376, avg=12247.81, stdev=4049.66 00:12:24.518 clat percentiles (usec): 00:12:24.518 | 1.00th=[ 6063], 5.00th=[ 7373], 10.00th=[ 8291], 20.00th=[ 8979], 00:12:24.518 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[11469], 60.00th=[12518], 00:12:24.518 | 70.00th=[13698], 80.00th=[14615], 90.00th=[16450], 95.00th=[19530], 00:12:24.518 | 99.00th=[25822], 99.50th=[26870], 99.90th=[29754], 99.95th=[48497], 00:12:24.518 | 99.99th=[48497] 00:12:24.518 write: IOPS=5441, BW=21.3MiB/s (22.3MB/s)(21.3MiB/1004msec); 0 zone resets 00:12:24.518 slat (nsec): min=1536, max=6698.4k, avg=88249.23, stdev=468786.00 00:12:24.518 clat (usec): min=1397, max=40608, avg=11838.56, stdev=5347.51 00:12:24.518 lat (usec): min=1405, max=40614, avg=11926.81, stdev=5385.26 00:12:24.518 clat percentiles (usec): 00:12:24.518 | 1.00th=[ 3818], 5.00th=[ 5932], 10.00th=[ 6980], 20.00th=[ 8586], 00:12:24.518 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[10945], 60.00th=[11994], 00:12:24.518 | 70.00th=[12780], 80.00th=[13960], 
90.00th=[16909], 95.00th=[22152], 00:12:24.518 | 99.00th=[35914], 99.50th=[40109], 99.90th=[40633], 99.95th=[40633], 00:12:24.518 | 99.99th=[40633] 00:12:24.518 bw ( KiB/s): min=20440, max=22248, per=22.66%, avg=21344.00, stdev=1278.45, samples=2 00:12:24.518 iops : min= 5110, max= 5562, avg=5336.00, stdev=319.61, samples=2 00:12:24.518 lat (msec) : 2=0.09%, 4=0.60%, 10=38.62%, 20=55.25%, 50=5.44% 00:12:24.518 cpu : usr=2.99%, sys=5.98%, ctx=518, majf=0, minf=1 00:12:24.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:24.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:24.518 issued rwts: total=5120,5463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:24.518 job3: (groupid=0, jobs=1): err= 0: pid=1321232: Tue Nov 26 07:21:52 2024 00:12:24.519 read: IOPS=5188, BW=20.3MiB/s (21.3MB/s)(20.3MiB/1002msec) 00:12:24.519 slat (nsec): min=930, max=26731k, avg=100902.26, stdev=642121.18 00:12:24.519 clat (usec): min=1224, max=46118, avg=12581.38, stdev=6604.94 00:12:24.519 lat (usec): min=1551, max=46119, avg=12682.28, stdev=6632.95 00:12:24.519 clat percentiles (usec): 00:12:24.519 | 1.00th=[ 5211], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 8586], 00:12:24.519 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10814], 00:12:24.519 | 70.00th=[12911], 80.00th=[15533], 90.00th=[18482], 95.00th=[24773], 00:12:24.519 | 99.00th=[39060], 99.50th=[40633], 99.90th=[44827], 99.95th=[45876], 00:12:24.519 | 99.99th=[45876] 00:12:24.519 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:12:24.519 slat (nsec): min=1538, max=10402k, avg=80563.17, stdev=407833.70 00:12:24.519 clat (usec): min=4502, max=34659, avg=10830.05, stdev=3646.34 00:12:24.519 lat (usec): min=4506, max=34661, avg=10910.61, stdev=3648.90 00:12:24.519 clat percentiles (usec): 00:12:24.519 | 1.00th=[ 5669], 5.00th=[ 7046], 10.00th=[ 7635], 20.00th=[ 8225], 00:12:24.519 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10159], 00:12:24.519 | 70.00th=[11469], 80.00th=[13304], 90.00th=[16581], 95.00th=[17695], 00:12:24.519 | 99.00th=[20055], 99.50th=[33162], 99.90th=[34341], 99.95th=[34341], 00:12:24.519 | 99.99th=[34866] 00:12:24.519 bw ( KiB/s): min=19216, max=25456, per=23.72%, avg=22336.00, stdev=4412.35, samples=2 00:12:24.519 iops : min= 4804, max= 6364, avg=5584.00, stdev=1103.09, samples=2 00:12:24.519 lat (msec) : 2=0.14%, 4=0.09%, 10=52.05%, 20=43.23%, 50=4.50% 00:12:24.519 cpu : usr=2.60%, sys=4.50%, ctx=687, majf=0, minf=1 00:12:24.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:24.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:24.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:24.519 issued rwts: total=5199,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:24.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:24.519 00:12:24.519 Run status group 0 (all jobs): 00:12:24.519 READ: bw=87.8MiB/s (92.0MB/s), 19.9MiB/s-25.8MiB/s (20.9MB/s-27.1MB/s), io=88.3MiB (92.6MB), run=1002-1006msec 00:12:24.519 WRITE: bw=92.0MiB/s (96.4MB/s), 21.3MiB/s-26.9MiB/s (22.3MB/s-28.2MB/s), io=92.5MiB (97.0MB), run=1002-1006msec 00:12:24.519 00:12:24.519 Disk stats (read/write): 00:12:24.519 nvme0n1: ios=4700/5120, merge=0/0, ticks=16803/15069, in_queue=31872, util=86.57% 
00:12:24.519 nvme0n2: ios=5659/5810, merge=0/0, ticks=32112/29468, in_queue=61580, util=86.22% 00:12:24.519 nvme0n3: ios=4096/4302, merge=0/0, ticks=16449/15597, in_queue=32046, util=88.07% 00:12:24.519 nvme0n4: ios=4195/4608, merge=0/0, ticks=15719/13032, in_queue=28751, util=95.19% 00:12:24.519 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:24.519 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1321560 00:12:24.519 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:24.519 07:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:24.519 [global] 00:12:24.519 thread=1 00:12:24.519 invalidate=1 00:12:24.519 rw=read 00:12:24.519 time_based=1 00:12:24.519 runtime=10 00:12:24.519 ioengine=libaio 00:12:24.519 direct=1 00:12:24.519 bs=4096 00:12:24.519 iodepth=1 00:12:24.519 norandommap=1 00:12:24.519 numjobs=1 00:12:24.519 00:12:24.519 [job0] 00:12:24.519 filename=/dev/nvme0n1 00:12:24.519 [job1] 00:12:24.519 filename=/dev/nvme0n2 00:12:24.519 [job2] 00:12:24.519 filename=/dev/nvme0n3 00:12:24.519 [job3] 00:12:24.519 filename=/dev/nvme0n4 00:12:24.519 Could not set queue depth (nvme0n1) 00:12:24.519 Could not set queue depth (nvme0n2) 00:12:24.519 Could not set queue depth (nvme0n3) 00:12:24.519 Could not set queue depth (nvme0n4) 00:12:24.780 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:24.780 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:24.780 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:24.780 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:24.780 fio-3.35 00:12:24.780 Starting 4 threads 00:12:28.084 07:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:28.084 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4403200, buflen=4096 00:12:28.084 fio: pid=1321757, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:28.085 07:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:28.085 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=270336, buflen=4096 00:12:28.085 fio: pid=1321756, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:28.085 07:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.085 07:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:28.085 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11415552, buflen=4096 00:12:28.085 fio: pid=1321754, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:28.085 07:21:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.085 07:21:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:28.085 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=937984, buflen=4096 00:12:28.085 fio: pid=1321755, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:28.085 07:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.085 07:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:28.346 00:12:28.346 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1321754: Tue Nov 26 07:21:56 2024 00:12:28.346 read: IOPS=958, BW=3832KiB/s (3924kB/s)(10.9MiB/2909msec) 00:12:28.346 slat (usec): min=6, max=38816, avg=44.22, stdev=786.82 00:12:28.346 clat (usec): min=648, max=1255, avg=989.55, stdev=68.23 00:12:28.346 lat (usec): min=673, max=39891, avg=1033.78, stdev=792.05 00:12:28.346 clat percentiles (usec): 00:12:28.346 | 1.00th=[ 807], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 947], 00:12:28.346 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1004], 00:12:28.346 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1106], 00:12:28.346 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1221], 99.95th=[ 1254], 00:12:28.346 | 99.99th=[ 1254] 00:12:28.346 bw ( KiB/s): min= 3808, max= 3992, per=72.87%, avg=3910.40, stdev=65.82, samples=5 00:12:28.346 iops : min= 952, max= 998, avg=977.60, stdev=16.46, samples=5 00:12:28.346 lat (usec) : 750=0.25%, 1000=56.35% 00:12:28.346 lat (msec) : 2=43.36% 00:12:28.346 cpu : usr=0.69%, sys=3.16%, ctx=2790, majf=0, minf=1 00:12:28.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.346 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.346 issued rwts: total=2788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.346 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1321755: Tue Nov 26 07:21:56 2024 00:12:28.346 read: IOPS=74, BW=296KiB/s (303kB/s)(916KiB/3099msec) 00:12:28.346 slat (usec): min=6, max=15086, avg=255.22, stdev=1666.60 00:12:28.346 clat (usec): min=442, max=41670, avg=13173.31, stdev=18593.90 00:12:28.346 lat (usec): min=474, max=47987, avg=13367.76, stdev=18599.28 00:12:28.346 clat percentiles (usec): 00:12:28.346 | 1.00th=[ 611], 5.00th=[ 701], 10.00th=[ 766], 20.00th=[ 816], 00:12:28.346 | 30.00th=[ 840], 40.00th=[ 865], 50.00th=[ 889], 60.00th=[ 988], 00:12:28.346 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:28.346 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:12:28.346 | 99.99th=[41681] 00:12:28.346 bw ( KiB/s): min= 104, max= 472, per=5.59%, avg=300.00, stdev=125.35, samples=6 00:12:28.346 iops : min= 26, max= 118, avg=75.00, stdev=31.34, samples=6 00:12:28.346 lat (usec) : 500=0.43%, 750=8.26%, 1000=51.30% 00:12:28.346 lat (msec) : 2=9.13%, 50=30.43% 00:12:28.346 cpu : usr=0.16%, sys=0.29%, ctx=235, majf=0, minf=2 00:12:28.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.346 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.346 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.346 issued rwts: total=230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.346 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1321756: Tue Nov 26 07:21:56 2024 00:12:28.346 read: IOPS=24, BW=96.5KiB/s (98.8kB/s)(264KiB/2736msec) 00:12:28.346 slat (usec): min=26, max=15567, avg=259.25, stdev=1898.60 00:12:28.346 clat (usec): min=987, max=43082, avg=40855.56, stdev=7112.04 00:12:28.346 lat (usec): min=1014, max=56974, avg=41118.33, stdev=7382.06 00:12:28.346 clat percentiles (usec): 00:12:28.346 | 1.00th=[ 988], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:12:28.346 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:28.346 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:12:28.346 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:12:28.346 | 99.99th=[43254] 00:12:28.346 bw ( KiB/s): min= 96, max= 104, per=1.81%, avg=97.60, stdev= 3.58, samples=5 00:12:28.346 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:12:28.346 lat (usec) : 1000=1.49% 00:12:28.346 lat (msec) : 2=1.49%, 50=95.52% 00:12:28.346 cpu : usr=0.15%, sys=0.00%, ctx=68, majf=0, minf=2 00:12:28.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.346 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.346 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.346 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1321757: Tue Nov 26 07:21:56 2024 00:12:28.346 read: IOPS=421, BW=1686KiB/s (1726kB/s)(4300KiB/2551msec) 00:12:28.346 slat (nsec): min=7028, max=62371, avg=25003.32, stdev=5503.43 00:12:28.346 clat (usec): min=437, max=43039, avg=2317.19, stdev=7597.33 00:12:28.346 lat (usec): min=446, max=43064, avg=2342.19, stdev=7597.40 00:12:28.346 clat percentiles (usec): 00:12:28.346 | 1.00th=[ 553], 5.00th=[ 635], 10.00th=[ 676], 20.00th=[ 725], 00:12:28.346 | 30.00th=[ 766], 40.00th=[ 807], 50.00th=[ 865], 60.00th=[ 955], 00:12:28.346 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1090], 00:12:28.346 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:12:28.346 | 99.99th=[43254] 00:12:28.347 bw ( KiB/s): min= 96, max= 4168, per=31.18%, avg=1673.60, stdev=1763.75, samples=5 00:12:28.347 iops : min= 24, max= 1042, avg=418.40, stdev=440.94, samples=5 00:12:28.347 lat (usec) : 500=0.19%, 750=24.54%, 1000=46.75% 00:12:28.347 lat (msec) : 2=24.91%, 50=3.53% 00:12:28.347 cpu : usr=0.55%, sys=1.14%, ctx=1076, majf=0, minf=2 00:12:28.347 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.347 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.347 issued rwts: total=1076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.347 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:28.347 00:12:28.347 Run status group 0 (all jobs): 00:12:28.347 READ: bw=5366KiB/s (5494kB/s), 96.5KiB/s-3832KiB/s 
(98.8kB/s-3924kB/s), io=16.2MiB (17.0MB), run=2551-3099msec 00:12:28.347 00:12:28.347 Disk stats (read/write): 00:12:28.347 nvme0n1: ios=2667/0, merge=0/0, ticks=2626/0, in_queue=2626, util=91.05% 00:12:28.347 nvme0n2: ios=227/0, merge=0/0, ticks=2951/0, in_queue=2951, util=93.31% 00:12:28.347 nvme0n3: ios=61/0, merge=0/0, ticks=2489/0, in_queue=2489, util=95.51% 00:12:28.347 nvme0n4: ios=1075/0, merge=0/0, ticks=2486/0, in_queue=2486, util=96.34% 00:12:28.347 07:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.347 07:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:28.607 07:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.607 07:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:28.870 07:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.870 07:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:28.870 07:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:28.870 07:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1321560 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:29.131 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:29.131 nvmf hotplug test: fio failed as expected 00:12:29.131 07:21:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.393 rmmod nvme_tcp 00:12:29.393 rmmod nvme_fabrics 00:12:29.393 rmmod nvme_keyring 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1317863 ']' 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1317863 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1317863 ']' 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1317863 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:29.393 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1317863 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1317863' 00:12:29.654 killing process with pid 1317863 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1317863 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1317863 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:29.654 07:21:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.654 07:21:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.200 00:12:32.200 real 0m29.446s 00:12:32.200 user 2m45.490s 00:12:32.200 sys 0m9.589s 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 ************************************ 00:12:32.200 END TEST nvmf_fio_target 00:12:32.200 ************************************ 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:32.200 ************************************ 00:12:32.200 START TEST nvmf_bdevio 00:12:32.200 ************************************ 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:32.200 * Looking for test storage... 
00:12:32.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:32.200 07:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.200 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:32.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.201 --rc genhtml_branch_coverage=1 00:12:32.201 --rc genhtml_function_coverage=1 00:12:32.201 --rc genhtml_legend=1 00:12:32.201 --rc geninfo_all_blocks=1 00:12:32.201 --rc geninfo_unexecuted_blocks=1 00:12:32.201 00:12:32.201 ' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:32.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.201 --rc genhtml_branch_coverage=1 00:12:32.201 --rc genhtml_function_coverage=1 00:12:32.201 --rc genhtml_legend=1 00:12:32.201 --rc geninfo_all_blocks=1 00:12:32.201 --rc geninfo_unexecuted_blocks=1 00:12:32.201 00:12:32.201 ' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:32.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.201 --rc genhtml_branch_coverage=1 00:12:32.201 --rc genhtml_function_coverage=1 00:12:32.201 --rc genhtml_legend=1 00:12:32.201 --rc geninfo_all_blocks=1 00:12:32.201 --rc geninfo_unexecuted_blocks=1 00:12:32.201 00:12:32.201 ' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:32.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.201 --rc genhtml_branch_coverage=1 00:12:32.201 --rc genhtml_function_coverage=1 00:12:32.201 --rc genhtml_legend=1 00:12:32.201 --rc geninfo_all_blocks=1 00:12:32.201 --rc geninfo_unexecuted_blocks=1 00:12:32.201 00:12:32.201 ' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.201 07:22:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:40.345 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:40.345 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.345 07:22:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:40.345 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:40.345 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.345 
07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.345 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:12:40.346 00:12:40.346 --- 10.0.0.2 ping statistics --- 00:12:40.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.346 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:40.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:12:40.346 00:12:40.346 --- 10.0.0.1 ping statistics --- 00:12:40.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.346 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1326918 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1326918 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1326918 ']' 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.346 07:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:40.346 [2024-11-26 07:22:07.691899] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:12:40.346 [2024-11-26 07:22:07.691970] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.346 [2024-11-26 07:22:07.792357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.346 [2024-11-26 07:22:07.845743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.346 [2024-11-26 07:22:07.845792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.346 [2024-11-26 07:22:07.845801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.346 [2024-11-26 07:22:07.845808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.346 [2024-11-26 07:22:07.845814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.346 [2024-11-26 07:22:07.847763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:40.346 [2024-11-26 07:22:07.847923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:40.346 [2024-11-26 07:22:07.848082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:40.346 [2024-11-26 07:22:07.848083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:40.607 [2024-11-26 07:22:08.576319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:40.607 Malloc0 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.607 07:22:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:40.607 [2024-11-26 07:22:08.651185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:40.607 { 00:12:40.607 "params": { 00:12:40.607 "name": "Nvme$subsystem", 00:12:40.607 "trtype": "$TEST_TRANSPORT", 00:12:40.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:40.607 "adrfam": "ipv4", 00:12:40.607 "trsvcid": "$NVMF_PORT", 00:12:40.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:40.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:40.607 "hdgst": ${hdgst:-false}, 00:12:40.607 "ddgst": ${ddgst:-false} 00:12:40.607 }, 00:12:40.607 "method": "bdev_nvme_attach_controller" 00:12:40.607 } 00:12:40.607 EOF 00:12:40.607 )") 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:40.607 07:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:40.607 "params": { 00:12:40.607 "name": "Nvme1", 00:12:40.607 "trtype": "tcp", 00:12:40.607 "traddr": "10.0.0.2", 00:12:40.607 "adrfam": "ipv4", 00:12:40.607 "trsvcid": "4420", 00:12:40.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:40.607 "hdgst": false, 00:12:40.607 "ddgst": false 00:12:40.607 }, 00:12:40.607 "method": "bdev_nvme_attach_controller" 00:12:40.607 }' 00:12:40.867 [2024-11-26 07:22:08.710601] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
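For readers following the trace: the target-side setup bdevio just performed over RPC amounts to five calls. Collected here as a sketch, with every argument copied from the rpc_cmd entries above (rpc_cmd is the suite's thin wrapper around SPDK's scripts/rpc.py):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, 8 KiB I/O unit size
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM-backed bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as NSID 1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON document printed by gen_nvmf_target_json above is handed to bdevio on /dev/fd/62 and tells it to attach a single bdev_nvme controller to that listener.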
00:12:40.867 [2024-11-26 07:22:08.710669] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327146 ]
00:12:40.867 [2024-11-26 07:22:08.806045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:40.867 [2024-11-26 07:22:08.862143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:40.867 [2024-11-26 07:22:08.862305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:40.867 [2024-11-26 07:22:08.862442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:41.156 I/O targets:
00:12:41.156 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:12:41.156
00:12:41.156
00:12:41.156 CUnit - A unit testing framework for C - Version 2.1-3
00:12:41.156 http://cunit.sourceforge.net/
00:12:41.156
00:12:41.156
00:12:41.156 Suite: bdevio tests on: Nvme1n1
00:12:41.156 Test: blockdev write read block ...passed
00:12:41.156 Test: blockdev write zeroes read block ...passed
00:12:41.156 Test: blockdev write zeroes read no split ...passed
00:12:41.156 Test: blockdev write zeroes read split ...passed
00:12:41.156 Test: blockdev write zeroes read split partial ...passed
00:12:41.156 Test: blockdev reset ...[2024-11-26 07:22:09.168559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:12:41.156 [2024-11-26 07:22:09.168664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6e970 (9): Bad file descriptor
00:12:41.156 [2024-11-26 07:22:09.191293] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:12:41.156 passed
00:12:41.156 Test: blockdev write read 8 blocks ...passed
00:12:41.156 Test: blockdev write read size > 128k ...passed
00:12:41.156 Test: blockdev write read invalid size ...passed
00:12:41.417 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:41.417 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:41.417 Test: blockdev write read max offset ...passed
00:12:41.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:41.417 Test: blockdev writev readv 8 blocks ...passed
00:12:41.417 Test: blockdev writev readv 30 x 1block ...passed
00:12:41.417 Test: blockdev writev readv block ...passed
00:12:41.417 Test: blockdev writev readv size > 128k ...passed
00:12:41.417 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:41.417 Test: blockdev comparev and writev ...[2024-11-26 07:22:09.458124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:41.417 [2024-11-26 07:22:09.458182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:12:41.417 [2024-11-26 07:22:09.458200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:41.417 [2024-11-26 07:22:09.458209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:12:41.417 [2024-11-26 07:22:09.458727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:41.417 [2024-11-26 07:22:09.458743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:12:41.417 [2024-11-26 07:22:09.458757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:41.417 [2024-11-26 07:22:09.458767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:12:41.417 [2024-11-26 07:22:09.459276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:41.417 [2024-11-26 07:22:09.459290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:12:41.417 [2024-11-26 07:22:09.459304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:41.417 [2024-11-26 07:22:09.459311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:12:41.418 [2024-11-26 07:22:09.459863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:41.418 [2024-11-26 07:22:09.459879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:12:41.418 [2024-11-26 07:22:09.459893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:41.418 [2024-11-26 07:22:09.459902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:41.418 passed 00:12:41.678 Test: blockdev nvme passthru rw ...passed 00:12:41.678 Test: blockdev nvme passthru vendor specific ...[2024-11-26 07:22:09.545007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:41.678 [2024-11-26 07:22:09.545028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:41.678 [2024-11-26 07:22:09.545379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:41.678 [2024-11-26 07:22:09.545396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:41.678 [2024-11-26 07:22:09.545769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:41.678 [2024-11-26 07:22:09.545784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:41.678 [2024-11-26 07:22:09.546170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:41.678 [2024-11-26 07:22:09.546184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:41.678 passed 00:12:41.678 Test: blockdev nvme admin passthru ...passed 00:12:41.678 Test: blockdev copy ...passed 00:12:41.678 00:12:41.678 Run Summary: Type Total Ran Passed Failed Inactive 00:12:41.678 suites 1 1 n/a 0 0 00:12:41.678 tests 23 23 23 0 0 00:12:41.678 asserts 152 152 152 0 n/a 00:12:41.678 00:12:41.678 Elapsed time = 1.145 seconds 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.678 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.678 rmmod nvme_tcp 00:12:41.939 rmmod nvme_fabrics 00:12:41.939 rmmod nvme_keyring 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
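All 23 bdevio tests passed; the *NOTICE* command/completion dumps in the suite output above are the negative cases working as intended, not errors slipping through. The parenthesized pairs are NVMe (status code type/status code) values. A small illustrative lookup, with meanings per the NVMe base specification (this helper is not part of the suite):

    declare -A nvme_status=(
        ["02/85"]="Media Errors / Compare Failure - the fused COMPARE that is meant to miscompare"
        ["00/09"]="Generic / Command Aborted due to Failed Fused Command - the WRITE paired with it"
        ["00/01"]="Generic / Invalid Command Opcode - the passthru probe commands"
    )
    for code in "02/85" "00/09" "00/01"; do
        printf '%s => %s\n' "$code" "${nvme_status[$code]}"
    done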
00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1326918 ']' 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1326918 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1326918 ']' 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1326918 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1326918 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1326918' 00:12:41.939 killing process with pid 1326918 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1326918 00:12:41.939 07:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1326918 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.200 07:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.117 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:44.117 00:12:44.117 real 0m12.329s 00:12:44.117 user 0m13.119s 00:12:44.117 sys 0m6.297s 00:12:44.117 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.117 07:22:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:44.117 ************************************ 00:12:44.117 END TEST nvmf_bdevio 00:12:44.117 ************************************ 00:12:44.117 07:22:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:44.117 00:12:44.117 real 5m4.652s 00:12:44.117 user 11m51.462s 00:12:44.117 sys 1m50.670s 
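The real/user/sys triples above are printed by the harness: every suite in this log runs under run_test, which emits the START/END banners, times the wrapped script, and fails the build on a nonzero exit. The call that produces everything from the next banner to its matching END is traced just below; as a standalone sketch:

    run_test nvmf_target_extra \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh \
        --transport=tcp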
00:12:44.117 07:22:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.117 07:22:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:44.117 ************************************ 00:12:44.117 END TEST nvmf_target_core 00:12:44.117 ************************************ 00:12:44.378 07:22:12 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:44.378 07:22:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.378 07:22:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.378 07:22:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:44.378 ************************************ 00:12:44.378 START TEST nvmf_target_extra 00:12:44.378 ************************************ 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:44.378 * Looking for test storage... 00:12:44.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.378 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.379 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.379 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.379 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.379 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:44.379 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:44.379 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.379 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.640 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.640 --rc genhtml_branch_coverage=1 00:12:44.640 --rc genhtml_function_coverage=1 00:12:44.640 --rc genhtml_legend=1 00:12:44.640 --rc geninfo_all_blocks=1 00:12:44.641 --rc geninfo_unexecuted_blocks=1 00:12:44.641 00:12:44.641 ' 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.641 --rc genhtml_branch_coverage=1 00:12:44.641 --rc genhtml_function_coverage=1 00:12:44.641 --rc genhtml_legend=1 00:12:44.641 --rc geninfo_all_blocks=1 00:12:44.641 --rc geninfo_unexecuted_blocks=1 00:12:44.641 00:12:44.641 ' 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.641 --rc genhtml_branch_coverage=1 00:12:44.641 --rc genhtml_function_coverage=1 00:12:44.641 --rc genhtml_legend=1 00:12:44.641 --rc geninfo_all_blocks=1 00:12:44.641 --rc geninfo_unexecuted_blocks=1 00:12:44.641 00:12:44.641 ' 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.641 --rc genhtml_branch_coverage=1 00:12:44.641 --rc genhtml_function_coverage=1 00:12:44.641 --rc genhtml_legend=1 00:12:44.641 --rc geninfo_all_blocks=1 00:12:44.641 --rc geninfo_unexecuted_blocks=1 00:12:44.641 00:12:44.641 ' 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
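The cmp_versions run traced above is how the harness picks its lcov flags: both version strings are split on dots, dashes, and colons and compared field by field, so lcov 1.15 sorts below 2 and the legacy --rc lcov_* options are kept. A condensed re-implementation of that comparison (a sketch reconstructed from the trace; scripts/common.sh remains the authoritative version):

    lt() {  # usage: lt 1.15 2  ->  exit 0 when $1 < $2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        done
        return 1  # equal versions are not strictly less-than
    }
    lt 1.15 2 && echo 'use legacy lcov --rc options'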
00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.641 ************************************ 00:12:44.641 START TEST nvmf_example 00:12:44.641 ************************************ 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:44.641 * Looking for test storage... 
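The nvmf_example test starting here has two halves, both visible further down this log: it boots the standalone target example inside the test's network namespace, then drives it with spdk_nvme_perf. Reduced to the two commands (paths, core mask, and perf arguments exactly as they appear in the later trace):

    # target side: example NVMe-oF app, shm id 0, cores 0-3
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
    # initiator side: 10 s of 4 KiB random I/O, 30% reads, queue depth 64
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'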
00:12:44.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.641 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.904 --rc genhtml_branch_coverage=1 00:12:44.904 --rc genhtml_function_coverage=1 00:12:44.904 --rc genhtml_legend=1 00:12:44.904 --rc geninfo_all_blocks=1 00:12:44.904 --rc geninfo_unexecuted_blocks=1 00:12:44.904 00:12:44.904 ' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.904 --rc genhtml_branch_coverage=1 00:12:44.904 --rc genhtml_function_coverage=1 00:12:44.904 --rc genhtml_legend=1 00:12:44.904 --rc geninfo_all_blocks=1 00:12:44.904 --rc geninfo_unexecuted_blocks=1 00:12:44.904 00:12:44.904 ' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.904 --rc genhtml_branch_coverage=1 00:12:44.904 --rc genhtml_function_coverage=1 00:12:44.904 --rc genhtml_legend=1 00:12:44.904 --rc geninfo_all_blocks=1 00:12:44.904 --rc geninfo_unexecuted_blocks=1 00:12:44.904 00:12:44.904 ' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.904 --rc genhtml_branch_coverage=1 00:12:44.904 --rc genhtml_function_coverage=1 00:12:44.904 --rc genhtml_legend=1 00:12:44.904 --rc geninfo_all_blocks=1 00:12:44.904 --rc geninfo_unexecuted_blocks=1 00:12:44.904 00:12:44.904 ' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:44.904 07:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.904 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:44.905 07:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.905 07:22:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.051 07:22:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:53.051 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:53.051 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:53.051 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:53.051 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:53.051 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:53.051 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:53.051 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:53.051 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:53.051 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:53.051 07:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:53.052 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:53.052 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:53.052 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:53.052 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.052 07:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:53.052 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:53.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:53.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms
00:12:53.053
00:12:53.053 --- 10.0.0.2 ping statistics ---
00:12:53.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:53.053 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:53.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:53.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms
00:12:53.053
00:12:53.053 --- 10.0.0.1 ping statistics ---
00:12:53.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:53.053 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1331802
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1331802
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1331802 ']'
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:53.053 07:22:20
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.053 07:22:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
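Everything nvmfexamplestart and the rpc_cmd calls above do can be reproduced by hand against the example app's default RPC socket (/var/tmp/spdk.sock); rpc_cmd here resolves to scripts/rpc.py (or its daemonized equivalent). Condensed, and including the listener registration that completes just below, the sequence is:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                     # prints the bdev name, Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420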
xtrace_disable 00:12:53.315 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:53.577 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.577 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:53.577 07:22:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:03.580 Initializing NVMe Controllers 00:13:03.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:03.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:03.580 Initialization complete. Launching workers. 00:13:03.580 ======================================================== 00:13:03.580 Latency(us) 00:13:03.580 Device Information : IOPS MiB/s Average min max 00:13:03.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19132.52 74.74 3344.84 635.64 16181.04 00:13:03.580 ======================================================== 00:13:03.580 Total : 19132.52 74.74 3344.84 635.64 16181.04 00:13:03.580 00:13:03.580 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:03.580 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:03.580 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:03.580 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:13:03.580 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:03.580 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:13:03.580 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:03.580 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:03.580 rmmod nvme_tcp 00:13:03.580 rmmod nvme_fabrics 00:13:03.841 rmmod nvme_keyring 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1331802 ']' 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1331802 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1331802 ']' 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1331802 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1331802 00:13:03.841 07:22:31 
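The perf table above decodes as IOPS, MiB/s, and average/min/max latency in microseconds for the requested workload (-q 64: queue depth 64, -o 4096: 4 KiB I/O, -w randrw -M 30: random mix with a 30% read share, -t 10: 10 s run). Two quick consistency checks on the reported numbers:

  throughput: 19132.52 IOPS * 4096 B / 2^20 = 74.74 MiB/s, matching the MiB/s column
  Little's law: avg latency ~ QD / IOPS = 64 / 19132.52 s = 3345 us, consistent with the reported 3344.84 us average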
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1331802' 00:13:03.841 killing process with pid 1331802 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1331802 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1331802 00:13:03.841 nvmf threads initialize successfully 00:13:03.841 bdev subsystem init successfully 00:13:03.841 created a nvmf target service 00:13:03.841 create targets's poll groups done 00:13:03.841 all subsystems of target started 00:13:03.841 nvmf target is running 00:13:03.841 all subsystems of target stopped 00:13:03.841 destroy targets's poll groups done 00:13:03.841 destroyed the nvmf target service 00:13:03.841 bdev subsystem finish successfully 00:13:03.841 nvmf threads destroy successfully 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.841 07:22:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.389 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:06.389 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:06.389 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.389 07:22:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:06.389 00:13:06.389 real 0m21.459s 00:13:06.389 user 0m46.469s 00:13:06.389 sys 0m7.156s 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:06.389 ************************************ 00:13:06.389 END TEST nvmf_example 00:13:06.389 ************************************ 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
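Note the symmetry with the setup phase: ipts tagged every rule it inserted with '-m comment --comment SPDK_NVMF:...', so the iptr teardown traced here can delete exactly the test's rules, and nothing else, with a save/filter/restore round trip:

  iptables-save | grep -v SPDK_NVMF | iptables-restore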
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:06.389 ************************************ 00:13:06.389 START TEST nvmf_filesystem 00:13:06.389 ************************************ 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:06.389 * Looking for test storage... 00:13:06.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.389 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:06.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.389 --rc genhtml_branch_coverage=1 00:13:06.389 --rc genhtml_function_coverage=1 00:13:06.389 --rc genhtml_legend=1 00:13:06.389 --rc geninfo_all_blocks=1 00:13:06.389 --rc geninfo_unexecuted_blocks=1 00:13:06.389 00:13:06.389 ' 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.390 --rc genhtml_branch_coverage=1 00:13:06.390 --rc genhtml_function_coverage=1 00:13:06.390 --rc genhtml_legend=1 00:13:06.390 --rc geninfo_all_blocks=1 00:13:06.390 --rc geninfo_unexecuted_blocks=1 00:13:06.390 00:13:06.390 ' 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.390 --rc genhtml_branch_coverage=1 00:13:06.390 --rc genhtml_function_coverage=1 00:13:06.390 --rc genhtml_legend=1 00:13:06.390 --rc geninfo_all_blocks=1 00:13:06.390 --rc geninfo_unexecuted_blocks=1 00:13:06.390 00:13:06.390 ' 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.390 --rc genhtml_branch_coverage=1 00:13:06.390 --rc genhtml_function_coverage=1 00:13:06.390 --rc genhtml_legend=1 00:13:06.390 --rc geninfo_all_blocks=1 00:13:06.390 --rc geninfo_unexecuted_blocks=1 00:13:06.390 00:13:06.390 ' 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:06.390 07:22:34 
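The cmp_versions walk above is scripts/common.sh deciding whether the installed lcov predates 2.x before choosing coverage options. Stripped of the trace plumbing, it is a per-field numeric comparison of dotted version strings; a condensed sketch (the real helper also splits on '-' and ':'):

  lt() {                    # usage: lt VER1 VER2, true if VER1 < VER2
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
  }
  lt 1.15 2 && echo 'lcov < 2: use the --rc lcov_*_coverage=1 options'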
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:06.390 
07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:06.390 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:06.391 #define SPDK_CONFIG_H 00:13:06.391 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:06.391 #define SPDK_CONFIG_APPS 1 00:13:06.391 #define SPDK_CONFIG_ARCH native 00:13:06.391 #undef SPDK_CONFIG_ASAN 00:13:06.391 #undef SPDK_CONFIG_AVAHI 00:13:06.391 #undef SPDK_CONFIG_CET 00:13:06.391 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:06.391 #define SPDK_CONFIG_COVERAGE 1 00:13:06.391 #define SPDK_CONFIG_CROSS_PREFIX 00:13:06.391 #undef SPDK_CONFIG_CRYPTO 00:13:06.391 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:06.391 #undef SPDK_CONFIG_CUSTOMOCF 00:13:06.391 #undef SPDK_CONFIG_DAOS 00:13:06.391 #define SPDK_CONFIG_DAOS_DIR 00:13:06.391 #define SPDK_CONFIG_DEBUG 1 00:13:06.391 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:06.391 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:06.391 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:06.391 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:06.391 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:06.391 #undef SPDK_CONFIG_DPDK_UADK 00:13:06.391 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:06.391 #define SPDK_CONFIG_EXAMPLES 1 00:13:06.391 #undef SPDK_CONFIG_FC 00:13:06.391 #define SPDK_CONFIG_FC_PATH 00:13:06.391 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:06.391 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:06.391 #define SPDK_CONFIG_FSDEV 1 00:13:06.391 #undef SPDK_CONFIG_FUSE 00:13:06.391 #undef SPDK_CONFIG_FUZZER 00:13:06.391 #define SPDK_CONFIG_FUZZER_LIB 00:13:06.391 #undef SPDK_CONFIG_GOLANG 00:13:06.391 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:06.391 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:06.391 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:06.391 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:06.391 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:06.391 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:06.391 #undef SPDK_CONFIG_HAVE_LZ4 00:13:06.391 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:06.391 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:06.391 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:06.391 #define SPDK_CONFIG_IDXD 1 00:13:06.391 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:06.391 #undef SPDK_CONFIG_IPSEC_MB 00:13:06.391 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:06.391 #define SPDK_CONFIG_ISAL 1 00:13:06.391 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:06.391 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:06.391 #define SPDK_CONFIG_LIBDIR 00:13:06.391 #undef SPDK_CONFIG_LTO 00:13:06.391 #define SPDK_CONFIG_MAX_LCORES 128 00:13:06.391 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:06.391 #define SPDK_CONFIG_NVME_CUSE 1 00:13:06.391 #undef SPDK_CONFIG_OCF 00:13:06.391 #define SPDK_CONFIG_OCF_PATH 00:13:06.391 #define SPDK_CONFIG_OPENSSL_PATH 00:13:06.391 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:06.391 #define SPDK_CONFIG_PGO_DIR 00:13:06.391 #undef SPDK_CONFIG_PGO_USE 00:13:06.391 #define SPDK_CONFIG_PREFIX /usr/local 00:13:06.391 #undef SPDK_CONFIG_RAID5F 00:13:06.391 #undef SPDK_CONFIG_RBD 00:13:06.391 #define SPDK_CONFIG_RDMA 1 00:13:06.391 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:06.391 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:06.391 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:06.391 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:06.391 #define SPDK_CONFIG_SHARED 1 00:13:06.391 #undef SPDK_CONFIG_SMA 00:13:06.391 #define SPDK_CONFIG_TESTS 1 00:13:06.391 #undef SPDK_CONFIG_TSAN 
00:13:06.391 #define SPDK_CONFIG_UBLK 1 00:13:06.391 #define SPDK_CONFIG_UBSAN 1 00:13:06.391 #undef SPDK_CONFIG_UNIT_TESTS 00:13:06.391 #undef SPDK_CONFIG_URING 00:13:06.391 #define SPDK_CONFIG_URING_PATH 00:13:06.391 #undef SPDK_CONFIG_URING_ZNS 00:13:06.391 #undef SPDK_CONFIG_USDT 00:13:06.391 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:06.391 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:06.391 #define SPDK_CONFIG_VFIO_USER 1 00:13:06.391 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:06.391 #define SPDK_CONFIG_VHOST 1 00:13:06.391 #define SPDK_CONFIG_VIRTIO 1 00:13:06.391 #undef SPDK_CONFIG_VTUNE 00:13:06.391 #define SPDK_CONFIG_VTUNE_DIR 00:13:06.391 #define SPDK_CONFIG_WERROR 1 00:13:06.391 #define SPDK_CONFIG_WPDK_DIR 00:13:06.391 #undef SPDK_CONFIG_XNVME 00:13:06.391 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
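The header dump above exists so applications.sh can pattern-match the generated config for the debug define before honoring SPDK_AUTOTEST_DEBUG_APPS. Reduced to essentials, the escaped-glob test at applications.sh@23 is equivalent to:

  cfg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
  if [[ -e $cfg && $(<"$cfg") == *'#define SPDK_CONFIG_DEBUG'* ]]; then
    : # debug build: the SPDK_AUTOTEST_DEBUG_APPS knobs may take effect
  fi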
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:06.391 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:06.392 07:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:06.392 07:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:06.392 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
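The exports traced above wire the freshly built SPDK, DPDK, and libvfio-user libraries into the loader path and arm the address/UB sanitizers; the repeated segments in LD_LIBRARY_PATH and PYTHONPATH appear to come from the common file being sourced once per nested test suite, re-appending the same triplet each time. A minimal sketch of the equivalent setup (paths and option strings are taken from the trace; the single-append form is an editorial simplification, not the script's actual code):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # from the trace above
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$rootdir/build/lib:$rootdir/dpdk/build/lib:$rootdir/build/libvfio-user/usr/local/lib"
export PYTHONPATH="$PYTHONPATH:$rootdir/python:$rootdir/test/rpc_plugins"
export PYTHONDONTWRITEBYTECODE=1    # keep CI workspaces free of .pyc files
# Fail fast on sanitizer findings and keep core dumps for triage:
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134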
00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
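The suppression plumbing traced above silences a known libfuse3 leak for LeakSanitizer and pins the default RPC socket. Reconstructed as a standalone sketch (assembled from the xtrace lines above, not copied verbatim from autotest_common.sh):

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo 'leak:libfuse3.so' > "$asan_suppression_file"     # suppress a known third-party leak
export LSAN_OPTIONS=suppressions=$asan_suppression_file
export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock             # UNIX socket rpc.py targets by default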
00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:06.393 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1334545 ]] 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1334545 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
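set_test_storage, traced below, walks df -T output and settles on the first candidate directory whose filesystem offers at least the requested space; note it pads the 2147483648-byte argument to requested_size=2214592512, an extra 64 MiB of headroom, and here lands on the overlay root with ~118 GB free. A loose, hypothetical simplification of that selection loop (the function name and the df -B1 invocation are illustrative assumptions, not the script's actual code):

requested_size=2214592512      # 2 GiB plus the 64 MiB pad seen in the trace
pick_test_storage() {
    local dir avail
    for dir in "$@"; do
        # Free bytes on the filesystem backing $dir (GNU df):
        avail=$(df -B1 --output=avail "$dir" 2>/dev/null | tail -n1) || continue
        [[ $avail =~ ^[0-9]+$ ]] && (( avail >= requested_size )) && { echo "$dir"; return 0; }
    done
    return 1
}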
00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.p5bnxO 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.p5bnxO/tests/target /tmp/spdk.p5bnxO 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:13:06.394 07:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118306316288 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11050192896 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:06.394 07:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677310464 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=946176 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:06.394 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:06.395 * Looking for test storage... 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118306316288 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13264785408 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:13:06.395 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:06.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.657 --rc genhtml_branch_coverage=1 00:13:06.657 --rc genhtml_function_coverage=1 00:13:06.657 --rc genhtml_legend=1 00:13:06.657 --rc geninfo_all_blocks=1 00:13:06.657 --rc geninfo_unexecuted_blocks=1 00:13:06.657 00:13:06.657 ' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:06.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.657 --rc genhtml_branch_coverage=1 00:13:06.657 --rc genhtml_function_coverage=1 00:13:06.657 --rc genhtml_legend=1 00:13:06.657 --rc geninfo_all_blocks=1 00:13:06.657 --rc geninfo_unexecuted_blocks=1 00:13:06.657 00:13:06.657 ' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:06.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.657 --rc genhtml_branch_coverage=1 00:13:06.657 --rc genhtml_function_coverage=1 00:13:06.657 --rc genhtml_legend=1 00:13:06.657 --rc geninfo_all_blocks=1 00:13:06.657 --rc geninfo_unexecuted_blocks=1 00:13:06.657 00:13:06.657 ' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:06.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.657 --rc genhtml_branch_coverage=1 00:13:06.657 --rc genhtml_function_coverage=1 00:13:06.657 --rc genhtml_legend=1 00:13:06.657 --rc geninfo_all_blocks=1 00:13:06.657 --rc geninfo_unexecuted_blocks=1 00:13:06.657 00:13:06.657 ' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:06.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:06.657 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:06.658 07:22:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:06.658 07:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:14.796 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:14.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:14.796 07:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:14.796 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:14.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:14.796 07:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.796 07:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:14.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:13:14.796 00:13:14.796 --- 10.0.0.2 ping statistics --- 00:13:14.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.796 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:14.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:13:14.796 00:13:14.796 --- 10.0.0.1 ping statistics --- 00:13:14.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.796 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:14.796 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:14.797 ************************************ 00:13:14.797 START TEST nvmf_filesystem_no_in_capsule 00:13:14.797 ************************************ 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1338301 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1338301 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1338301 ']' 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.797 
07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.797 07:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:14.797 [2024-11-26 07:22:42.299351] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:13:14.797 [2024-11-26 07:22:42.299411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.797 [2024-11-26 07:22:42.401814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.797 [2024-11-26 07:22:42.455286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.797 [2024-11-26 07:22:42.455332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.797 [2024-11-26 07:22:42.455340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.797 [2024-11-26 07:22:42.455347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.797 [2024-11-26 07:22:42.455353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
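With the target launched inside the cvl_0_0_ns_spdk namespace and its reactors running, the test provisions it over JSON-RPC, as traced below. A condensed equivalent of that sequence via scripts/rpc.py (the log's rpc_cmd helper issues the same RPCs over the same socket; all arguments are verbatim from the trace):

rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data for this variant
$rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420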
00:13:14.797 [2024-11-26 07:22:42.457755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.797 [2024-11-26 07:22:42.457917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.797 [2024-11-26 07:22:42.458080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.797 [2024-11-26 07:22:42.458080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.058 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.058 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:15.058 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.058 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.058 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.319 [2024-11-26 07:22:43.180576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.319 Malloc1 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.319 07:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.319 [2024-11-26 07:22:43.333972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.319 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:15.319 { 00:13:15.319 "name": "Malloc1", 00:13:15.319 "aliases": [ 00:13:15.319 "779f4cda-8558-49bd-b1de-dd981eb8c1e7" 00:13:15.319 ], 00:13:15.319 "product_name": "Malloc disk", 00:13:15.319 "block_size": 512, 00:13:15.319 "num_blocks": 1048576, 00:13:15.319 "uuid": "779f4cda-8558-49bd-b1de-dd981eb8c1e7", 00:13:15.319 "assigned_rate_limits": { 00:13:15.319 "rw_ios_per_sec": 0, 00:13:15.319 "rw_mbytes_per_sec": 0, 00:13:15.319 "r_mbytes_per_sec": 0, 00:13:15.319 "w_mbytes_per_sec": 0 00:13:15.319 }, 00:13:15.319 "claimed": true, 00:13:15.319 "claim_type": "exclusive_write", 00:13:15.319 "zoned": false, 00:13:15.319 "supported_io_types": { 00:13:15.319 "read": 
true, 00:13:15.319 "write": true, 00:13:15.319 "unmap": true, 00:13:15.319 "flush": true, 00:13:15.319 "reset": true, 00:13:15.319 "nvme_admin": false, 00:13:15.319 "nvme_io": false, 00:13:15.319 "nvme_io_md": false, 00:13:15.319 "write_zeroes": true, 00:13:15.319 "zcopy": true, 00:13:15.319 "get_zone_info": false, 00:13:15.319 "zone_management": false, 00:13:15.319 "zone_append": false, 00:13:15.319 "compare": false, 00:13:15.320 "compare_and_write": false, 00:13:15.320 "abort": true, 00:13:15.320 "seek_hole": false, 00:13:15.320 "seek_data": false, 00:13:15.320 "copy": true, 00:13:15.320 "nvme_iov_md": false 00:13:15.320 }, 00:13:15.320 "memory_domains": [ 00:13:15.320 { 00:13:15.320 "dma_device_id": "system", 00:13:15.320 "dma_device_type": 1 00:13:15.320 }, 00:13:15.320 { 00:13:15.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.320 "dma_device_type": 2 00:13:15.320 } 00:13:15.320 ], 00:13:15.320 "driver_specific": {} 00:13:15.320 } 00:13:15.320 ]' 00:13:15.320 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:15.320 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:15.581 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:15.581 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:15.581 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:15.581 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:15.581 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:15.581 07:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.968 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.968 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.968 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.968 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:16.968 07:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:18.900 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:18.900 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:18.900 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:18.900 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:18.900 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.900 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:18.900 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:18.900 07:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:19.161 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:19.162 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:19.162 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:19.162 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:19.162 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:19.162 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:19.162 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:19.162 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:19.162 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:19.423 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:19.423 07:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.806 ************************************ 00:13:20.806 START TEST filesystem_ext4 00:13:20.806 ************************************ 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
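Condensed from the host-side trace above: connect over TCP, wait for the namespace to surface by its serial, resolve the block device name, and lay down a single GPT partition. A sketch rather than the verbatim filesystem.sh; HOSTNQN and HOSTID stand in for this rig's uuid values:

nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# waitforserial: poll until lsblk reports a device carrying the subsystem serial.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
    sleep 2
done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe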
00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:20.806 mke2fs 1.47.0 (5-Feb-2023) 00:13:20.806 Discarding device blocks: 0/522240 done 00:13:20.806 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:20.806 Filesystem UUID: 8efe6339-521f-4e41-a55c-75fb5a948ca7 00:13:20.806 Superblock backups stored on blocks: 00:13:20.806 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:20.806 00:13:20.806 Allocating group tables: 0/64 done 00:13:20.806 Writing inode tables: 0/64 done 00:13:20.806 Creating journal (8192 blocks): done 00:13:20.806 Writing superblocks and filesystem accounting information: 0/64 done 00:13:20.806 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:20.806 07:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:27.386 07:22:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:27.386 
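That is one full pass of nvmf_filesystem_create as traced above (filesystem.sh@21-30): make the filesystem, mount it, prove a small write round-trips over NVMe/TCP, and unmount. The same body runs next for btrfs and xfs, with only the mkfs invocation changing:

mkfs.ext4 -F /dev/nvme0n1p1       # ext4 takes -F; btrfs and xfs take -f
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device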
07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1338301 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:27.386 00:13:27.386 real 0m6.584s 00:13:27.386 user 0m0.035s 00:13:27.386 sys 0m0.069s 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:27.386 ************************************ 00:13:27.386 END TEST filesystem_ext4 00:13:27.386 ************************************ 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:27.386 ************************************ 00:13:27.386 START TEST filesystem_btrfs 00:13:27.386 ************************************ 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:27.386 07:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:27.386 btrfs-progs v6.8.1 00:13:27.386 See https://btrfs.readthedocs.io for more information. 00:13:27.386 00:13:27.386 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:27.386 NOTE: several default settings have changed in version 5.15, please make sure 00:13:27.386 this does not affect your deployments: 00:13:27.386 - DUP for metadata (-m dup) 00:13:27.386 - enabled no-holes (-O no-holes) 00:13:27.386 - enabled free-space-tree (-R free-space-tree) 00:13:27.386 00:13:27.386 Label: (null) 00:13:27.386 UUID: c9f9f20b-18b8-45b5-923e-fd404fca274c 00:13:27.386 Node size: 16384 00:13:27.386 Sector size: 4096 (CPU page size: 4096) 00:13:27.386 Filesystem size: 510.00MiB 00:13:27.386 Block group profiles: 00:13:27.386 Data: single 8.00MiB 00:13:27.386 Metadata: DUP 32.00MiB 00:13:27.386 System: DUP 8.00MiB 00:13:27.386 SSD detected: yes 00:13:27.386 Zoned device: no 00:13:27.386 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:27.386 Checksum: crc32c 00:13:27.386 Number of devices: 1 00:13:27.386 Devices: 00:13:27.386 ID SIZE PATH 00:13:27.386 1 510.00MiB /dev/nvme0n1p1 00:13:27.386 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:27.386 07:22:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1338301 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:28.801 
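After each unmount the case asserts the data path survived: the target process (pid 1338301 in this phase) must still be running, and the remote namespace plus its partition must still enumerate on the host, exactly as the kill/lsblk entries above and below show:

kill -0 1338301                           # signal 0 = existence check on the target pid
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible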
07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:28.801 00:13:28.801 real 0m1.442s 00:13:28.801 user 0m0.024s 00:13:28.801 sys 0m0.125s 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:28.801 ************************************ 00:13:28.801 END TEST filesystem_btrfs 00:13:28.801 ************************************ 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:28.801 ************************************ 00:13:28.801 START TEST filesystem_xfs 00:13:28.801 ************************************ 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:28.801 07:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:28.801 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:28.801 = sectsz=512 attr=2, projid32bit=1 00:13:28.801 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:28.801 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:28.801 data 
= bsize=4096 blocks=130560, imaxpct=25 00:13:28.801 = sunit=0 swidth=0 blks 00:13:28.801 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:28.801 log =internal log bsize=4096 blocks=16384, version=2 00:13:28.801 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:28.801 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:29.829 Discarding blocks...Done. 00:13:29.829 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:29.829 07:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1338301 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:32.383 00:13:32.383 real 0m3.560s 00:13:32.383 user 0m0.028s 00:13:32.383 sys 0m0.078s 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:32.383 ************************************ 00:13:32.383 END TEST filesystem_xfs 00:13:32.383 ************************************ 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:32.383 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.645 07:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1338301 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1338301 ']' 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1338301 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1338301 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1338301' 00:13:32.645 killing process with pid 1338301 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1338301 00:13:32.645 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1338301 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:32.906 00:13:32.906 real 0m18.594s 00:13:32.906 user 1m13.384s 00:13:32.906 sys 0m1.497s 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.906 ************************************ 00:13:32.906 END TEST nvmf_filesystem_no_in_capsule 00:13:32.906 ************************************ 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:32.906 ************************************ 00:13:32.906 START TEST nvmf_filesystem_in_capsule 00:13:32.906 ************************************ 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1342226 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1342226 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1342226 ']' 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
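From here the suite tears down the first target and repeats the identical flow against a fresh one (pid 1342226) with a single setup change, visible in the RPC trace below: the TCP transport is created with -c 4096 instead of -c 0, letting hosts carry up to 4096 bytes of write data inside the command capsule itself. rpc_cmd is the test wrapper around scripts/rpc.py, so the equivalent standalone call would be roughly:

scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 4096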
00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.906 07:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.906 [2024-11-26 07:23:00.972194] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:13:32.906 [2024-11-26 07:23:00.972250] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.167 [2024-11-26 07:23:01.065667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.167 [2024-11-26 07:23:01.100323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.167 [2024-11-26 07:23:01.100354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.167 [2024-11-26 07:23:01.100360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.167 [2024-11-26 07:23:01.100368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.167 [2024-11-26 07:23:01.100372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.167 [2024-11-26 07:23:01.101711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.167 [2024-11-26 07:23:01.101864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.167 [2024-11-26 07:23:01.102015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.167 [2024-11-26 07:23:01.102016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.740 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.740 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:33.740 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:33.740 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:33.740 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.740 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.740 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:33.740 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:33.740 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.740 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.740 [2024-11-26 07:23:01.829068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.002 07:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 Malloc1 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 [2024-11-26 07:23:01.958342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:34.002 07:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:34.002 { 00:13:34.002 "name": "Malloc1", 00:13:34.002 "aliases": [ 00:13:34.002 "6d1faf7d-7613-4be8-85d5-ec8430cd1d4a" 00:13:34.002 ], 00:13:34.002 "product_name": "Malloc disk", 00:13:34.002 "block_size": 512, 00:13:34.002 "num_blocks": 1048576, 00:13:34.002 "uuid": "6d1faf7d-7613-4be8-85d5-ec8430cd1d4a", 00:13:34.002 "assigned_rate_limits": { 00:13:34.002 "rw_ios_per_sec": 0, 00:13:34.002 "rw_mbytes_per_sec": 0, 00:13:34.002 "r_mbytes_per_sec": 0, 00:13:34.002 "w_mbytes_per_sec": 0 00:13:34.002 }, 00:13:34.002 "claimed": true, 00:13:34.002 "claim_type": "exclusive_write", 00:13:34.002 "zoned": false, 00:13:34.002 "supported_io_types": { 00:13:34.002 "read": true, 00:13:34.002 "write": true, 00:13:34.002 "unmap": true, 00:13:34.002 "flush": true, 00:13:34.002 "reset": true, 00:13:34.002 "nvme_admin": false, 00:13:34.002 "nvme_io": false, 00:13:34.002 "nvme_io_md": false, 00:13:34.002 "write_zeroes": true, 00:13:34.002 "zcopy": true, 00:13:34.002 "get_zone_info": false, 00:13:34.002 "zone_management": false, 00:13:34.002 "zone_append": false, 00:13:34.002 "compare": false, 00:13:34.002 "compare_and_write": false, 00:13:34.002 "abort": true, 00:13:34.002 "seek_hole": false, 00:13:34.002 "seek_data": false, 00:13:34.002 "copy": true, 00:13:34.002 "nvme_iov_md": false 00:13:34.002 }, 00:13:34.002 "memory_domains": [ 00:13:34.002 { 00:13:34.002 "dma_device_id": "system", 00:13:34.002 "dma_device_type": 1 00:13:34.002 }, 00:13:34.002 { 00:13:34.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.002 "dma_device_type": 2 00:13:34.002 } 00:13:34.002 ], 00:13:34.002 "driver_specific": {} 00:13:34.002 } 00:13:34.002 ]' 00:13:34.002 07:23:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:34.002 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:34.003 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:34.003 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:34.003 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:34.003 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:34.003 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:34.003 07:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.917 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.917 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:35.917 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.917 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:35.917 07:23:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:37.832 07:23:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:37.832 07:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:38.404 07:23:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:39.346 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:39.346 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:39.346 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:39.346 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.346 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:39.607 ************************************ 00:13:39.607 START TEST filesystem_in_capsule_ext4 00:13:39.607 ************************************ 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:39.607 07:23:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:39.607 mke2fs 1.47.0 (5-Feb-2023) 00:13:39.607 Discarding device blocks: 0/522240 done 00:13:39.607 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:39.607 Filesystem UUID: 957d1bbc-2840-4f76-a2e1-cdae0b37f9e9 00:13:39.607 Superblock backups stored on blocks: 00:13:39.607 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:39.607 00:13:39.607 Allocating group tables: 0/64 done 00:13:39.607 Writing inode tables: 
0/64 done 00:13:39.607 Creating journal (8192 blocks): done 00:13:41.937 Writing superblocks and filesystem accounting information: 0/64 done 00:13:41.937 00:13:41.937 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:41.937 07:23:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1342226 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:47.230 00:13:47.230 real 0m7.722s 00:13:47.230 user 0m0.036s 00:13:47.230 sys 0m0.074s 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:47.230 ************************************ 00:13:47.230 END TEST filesystem_in_capsule_ext4 00:13:47.230 ************************************ 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:47.230 
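Every case in this log is framed by run_test, which produces the starred START/END banners and the real/user/sys timing seen above. Inferred from those banners and timings rather than copied from autotest_common.sh, the wrapper behaves roughly like:

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    time "$@"                 # $? after the bash time keyword is the command's status
    local rc=$?
    echo "END TEST $name"
    echo '************************************'
    return $rc
}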
************************************ 00:13:47.230 START TEST filesystem_in_capsule_btrfs 00:13:47.230 ************************************ 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:47.230 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:47.803 btrfs-progs v6.8.1 00:13:47.803 See https://btrfs.readthedocs.io for more information. 00:13:47.803 00:13:47.803 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:47.804 NOTE: several default settings have changed in version 5.15, please make sure
00:13:47.804 this does not affect your deployments:
00:13:47.804 - DUP for metadata (-m dup)
00:13:47.804 - enabled no-holes (-O no-holes)
00:13:47.804 - enabled free-space-tree (-R free-space-tree)
00:13:47.804
00:13:47.804 Label: (null)
00:13:47.804 UUID: bb5e940a-ade1-493a-83b3-2fd5f11bd4de
00:13:47.804 Node size: 16384
00:13:47.804 Sector size: 4096 (CPU page size: 4096)
00:13:47.804 Filesystem size: 510.00MiB
00:13:47.804 Block group profiles:
00:13:47.804 Data: single 8.00MiB
00:13:47.804 Metadata: DUP 32.00MiB
00:13:47.804 System: DUP 8.00MiB
00:13:47.804 SSD detected: yes
00:13:47.804 Zoned device: no
00:13:47.804 Features: extref, skinny-metadata, no-holes, free-space-tree
00:13:47.804 Checksum: crc32c
00:13:47.804 Number of devices: 1
00:13:47.804 Devices:
00:13:47.804 ID SIZE PATH
00:13:47.804 1 510.00MiB /dev/nvme0n1p1
00:13:47.804
00:13:47.804 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:13:47.804 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:13:48.064 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:13:48.064 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:13:48.064 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:13:48.064 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:13:48.064 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:13:48.064 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:13:48.064 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1342226
00:13:48.064 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:13:48.064 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:13:48.064 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:13:48.065 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:13:48.065
00:13:48.065 real 0m0.709s
00:13:48.065 user 0m0.033s
00:13:48.065 sys 0m0.115s
00:13:48.065 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:48.065 07:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:13:48.065 ************************************
00:13:48.065 END TEST filesystem_in_capsule_btrfs
00:13:48.065 ************************************
00:13:48.065 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:13:48.065 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:13:48.065 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:48.065 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:13:48.065 ************************************
00:13:48.065 START TEST filesystem_in_capsule_xfs
00:13:48.065 ************************************
00:13:48.065 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:13:48.065 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:13:48.065 = sectsz=512 attr=2, projid32bit=1
00:13:48.065 = crc=1 finobt=1, sparse=1, rmapbt=0
00:13:48.065 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:13:48.065 data = bsize=4096 blocks=130560, imaxpct=25
00:13:48.065 = sunit=0 swidth=0 blks
00:13:48.065 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:13:48.065 log =internal log bsize=4096 blocks=16384, version=2
00:13:48.065 = sectsz=512 sunit=0 blks, lazy-count=1
00:13:48.065 realtime =none extsz=4096 blocks=0, rtextents=0
00:13:49.006 Discarding blocks...Done.
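All three mkfs invocations in this suite go through the make_filesystem helper whose xtrace appears above (common/autotest_common.sh@930-941). A condensed sketch of what those traced lines imply, with the retry loop behind the local i=0 elided and the ext4 force flag filled in as an assumption, since only the non-ext4 branch is exercised here:

    # Sketch reconstructed from the autotest_common.sh xtrace; not the verbatim helper.
    make_filesystem() {
        local fstype=$1        # @930
        local dev_name=$2      # @931
        local i=0              # @932: retry counter, unused in this sketch
        local force            # @933
        if [ "$fstype" = ext4 ]; then   # @935
            force=-F           # assumption: mkfs.ext4 spells its force flag -F
        else
            force=-f           # @938: btrfs and xfs both take -f
        fi
        mkfs."$fstype" "$force" "$dev_name"   # @941, e.g. mkfs.xfs -f /dev/nvme0n1p1
    }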
00:13:49.006 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:49.007 07:23:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:51.098 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1342226 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:51.099 00:13:51.099 real 0m2.717s 00:13:51.099 user 0m0.032s 00:13:51.099 sys 0m0.075s 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:51.099 ************************************ 00:13:51.099 END TEST filesystem_in_capsule_xfs 00:13:51.099 ************************************ 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:51.099 07:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:51.388 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.388 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.388 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:13:51.388 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:51.388 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.388 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:51.388 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.388 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:51.389 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.389 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.389 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:51.389 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.389 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:51.389 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1342226 00:13:51.389 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1342226 ']' 00:13:51.389 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1342226 00:13:51.389 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:51.389 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.684 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1342226 00:13:51.684 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.684 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.684 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1342226' 00:13:51.684 killing process with pid 1342226 00:13:51.684 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1342226 00:13:51.684 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1342226 00:13:51.684 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:51.684 00:13:51.684 real 0m18.810s 00:13:51.684 user 1m14.381s 00:13:51.684 sys 0m1.444s 00:13:51.684 07:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.684 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:51.684 ************************************ 00:13:51.684 END TEST nvmf_filesystem_in_capsule 00:13:51.684 ************************************ 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.946 rmmod nvme_tcp 00:13:51.946 rmmod nvme_fabrics 00:13:51.946 rmmod nvme_keyring 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.946 07:23:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.859 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:53.859 00:13:53.859 real 0m47.819s 00:13:53.859 user 2m30.166s 00:13:53.859 sys 0m8.915s 00:13:53.859 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.859 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:53.859 
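The nvmftestfini teardown traced above condenses to a handful of commands. A sketch of that tcp cleanup path (module, interface, and namespace names as in the trace; the body of _remove_spdk_ns is not shown in this log, so the netns deletion below is an assumption about what it does):

    modprobe -v -r nvme-tcp                               # nvmf/common.sh@126: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics                           # @127: no-op if the line above already removed it
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # @791: drop only the SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                              # @303: clear the initiator-side address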
************************************ 00:13:53.859 END TEST nvmf_filesystem 00:13:53.859 ************************************ 00:13:54.120 07:23:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:54.120 07:23:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:54.120 07:23:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.120 07:23:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.120 ************************************ 00:13:54.120 START TEST nvmf_target_discovery 00:13:54.120 ************************************ 00:13:54.120 07:23:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:54.120 * Looking for test storage... 00:13:54.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:54.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.120 --rc genhtml_branch_coverage=1 00:13:54.120 --rc genhtml_function_coverage=1 00:13:54.120 --rc genhtml_legend=1 00:13:54.120 --rc geninfo_all_blocks=1 00:13:54.120 --rc geninfo_unexecuted_blocks=1 00:13:54.120 00:13:54.120 ' 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:54.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.120 --rc genhtml_branch_coverage=1 00:13:54.120 --rc genhtml_function_coverage=1 00:13:54.120 --rc genhtml_legend=1 00:13:54.120 --rc geninfo_all_blocks=1 00:13:54.120 --rc geninfo_unexecuted_blocks=1 00:13:54.120 00:13:54.120 ' 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:54.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.120 --rc genhtml_branch_coverage=1 00:13:54.120 --rc genhtml_function_coverage=1 00:13:54.120 --rc genhtml_legend=1 00:13:54.120 --rc geninfo_all_blocks=1 00:13:54.120 --rc geninfo_unexecuted_blocks=1 00:13:54.120 00:13:54.120 ' 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:54.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.120 --rc genhtml_branch_coverage=1 00:13:54.120 --rc genhtml_function_coverage=1 00:13:54.120 --rc genhtml_legend=1 00:13:54.120 --rc geninfo_all_blocks=1 00:13:54.120 --rc geninfo_unexecuted_blocks=1 00:13:54.120 00:13:54.120 ' 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.120 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.382 07:23:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.530 07:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:02.530 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:02.530 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:02.530 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.530 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:02.531 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.531 07:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:02.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:14:02.531 00:14:02.531 --- 10.0.0.2 ping statistics --- 00:14:02.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.531 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:14:02.531 00:14:02.531 --- 10.0.0.1 ping statistics --- 00:14:02.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.531 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1350172 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1350172 00:14:02.531 07:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1350172 ']' 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.531 07:23:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.531 [2024-11-26 07:23:29.823581] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:14:02.531 [2024-11-26 07:23:29.823648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.531 [2024-11-26 07:23:29.923791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.531 [2024-11-26 07:23:29.977782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.531 [2024-11-26 07:23:29.977832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.531 [2024-11-26 07:23:29.977841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.531 [2024-11-26 07:23:29.977848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.531 [2024-11-26 07:23:29.977854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
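Collapsed back onto one line, the target launch that produced the EAL output above is the following (the command is verbatim from the nvmf/common.sh@508 xtrace; the flag comments are interpretation):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF
    # -i 0       shm id, matching --file-prefix=spdk0 in the EAL parameters
    # -e 0xFFFF  tracepoint group mask, echoed by app_setup_trace above
    # -m 0xF     core mask for cores 0-3, the four reactors reported just below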
00:14:02.531 [2024-11-26 07:23:29.980079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.531 [2024-11-26 07:23:29.980232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.531 [2024-11-26 07:23:29.980449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.531 [2024-11-26 07:23:29.980450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.792 [2024-11-26 07:23:30.693407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.792 Null1 00:14:02.792 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 07:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 [2024-11-26 07:23:30.753855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 Null2 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:14:02.793 Null3 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 Null4 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.793 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.054 07:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.054 07:23:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:03.054 00:14:03.054 Discovery Log Number of Records 6, Generation counter 6 00:14:03.054 =====Discovery Log Entry 0====== 00:14:03.054 trtype: tcp 00:14:03.054 adrfam: ipv4 00:14:03.054 subtype: current discovery subsystem 00:14:03.054 treq: not required 00:14:03.054 portid: 0 00:14:03.054 trsvcid: 4420 00:14:03.054 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:03.054 traddr: 10.0.0.2 00:14:03.054 eflags: explicit discovery connections, duplicate discovery information 00:14:03.054 sectype: none 00:14:03.054 =====Discovery Log Entry 1====== 00:14:03.054 trtype: tcp 00:14:03.054 adrfam: ipv4 00:14:03.054 subtype: nvme subsystem 00:14:03.054 treq: not required 00:14:03.054 portid: 0 00:14:03.054 trsvcid: 4420 00:14:03.054 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:03.054 traddr: 10.0.0.2 00:14:03.054 eflags: none 00:14:03.054 sectype: none 00:14:03.054 =====Discovery Log Entry 2====== 00:14:03.054 trtype: tcp 00:14:03.054 adrfam: ipv4 00:14:03.054 subtype: nvme subsystem 00:14:03.054 treq: not required 00:14:03.054 portid: 0 00:14:03.054 trsvcid: 4420 00:14:03.054 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:03.054 traddr: 10.0.0.2 00:14:03.054 eflags: none 00:14:03.054 sectype: none 00:14:03.054 =====Discovery Log Entry 3====== 00:14:03.054 trtype: tcp 00:14:03.054 adrfam: ipv4 00:14:03.054 subtype: nvme subsystem 00:14:03.054 treq: not required 00:14:03.054 portid: 0 00:14:03.054 trsvcid: 4420 00:14:03.054 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:03.054 traddr: 10.0.0.2 00:14:03.054 eflags: none 00:14:03.054 sectype: none 00:14:03.054 =====Discovery Log Entry 4====== 00:14:03.054 trtype: tcp 00:14:03.054 adrfam: ipv4 00:14:03.054 subtype: nvme subsystem 
00:14:03.054 treq: not required 00:14:03.054 portid: 0 00:14:03.054 trsvcid: 4420 00:14:03.055 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:03.055 traddr: 10.0.0.2 00:14:03.055 eflags: none 00:14:03.055 sectype: none 00:14:03.055 =====Discovery Log Entry 5====== 00:14:03.055 trtype: tcp 00:14:03.055 adrfam: ipv4 00:14:03.055 subtype: discovery subsystem referral 00:14:03.055 treq: not required 00:14:03.055 portid: 0 00:14:03.055 trsvcid: 4430 00:14:03.055 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:03.055 traddr: 10.0.0.2 00:14:03.055 eflags: none 00:14:03.055 sectype: none 00:14:03.055 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:03.055 Perform nvmf subsystem discovery via RPC 00:14:03.055 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:03.055 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.055 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.055 [ 00:14:03.055 { 00:14:03.055 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:03.055 "subtype": "Discovery", 00:14:03.055 "listen_addresses": [ 00:14:03.055 { 00:14:03.055 "trtype": "TCP", 00:14:03.055 "adrfam": "IPv4", 00:14:03.055 "traddr": "10.0.0.2", 00:14:03.055 "trsvcid": "4420" 00:14:03.055 } 00:14:03.055 ], 00:14:03.055 "allow_any_host": true, 00:14:03.055 "hosts": [] 00:14:03.055 }, 00:14:03.055 { 00:14:03.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.055 "subtype": "NVMe", 00:14:03.055 "listen_addresses": [ 00:14:03.055 { 00:14:03.055 "trtype": "TCP", 00:14:03.055 "adrfam": "IPv4", 00:14:03.055 "traddr": "10.0.0.2", 00:14:03.055 "trsvcid": "4420" 00:14:03.055 } 00:14:03.055 ], 00:14:03.055 "allow_any_host": true, 00:14:03.055 "hosts": [], 00:14:03.055 "serial_number": "SPDK00000000000001", 00:14:03.055 "model_number": "SPDK bdev Controller", 00:14:03.055 "max_namespaces": 32, 00:14:03.055 "min_cntlid": 1, 00:14:03.055 "max_cntlid": 65519, 00:14:03.055 "namespaces": [ 00:14:03.055 { 00:14:03.055 "nsid": 1, 00:14:03.055 "bdev_name": "Null1", 00:14:03.055 "name": "Null1", 00:14:03.055 "nguid": "16E83D5CCE61406791E00578E4F9351C", 00:14:03.055 "uuid": "16e83d5c-ce61-4067-91e0-0578e4f9351c" 00:14:03.055 } 00:14:03.055 ] 00:14:03.055 }, 00:14:03.055 { 00:14:03.055 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:03.055 "subtype": "NVMe", 00:14:03.055 "listen_addresses": [ 00:14:03.055 { 00:14:03.055 "trtype": "TCP", 00:14:03.055 "adrfam": "IPv4", 00:14:03.055 "traddr": "10.0.0.2", 00:14:03.055 "trsvcid": "4420" 00:14:03.055 } 00:14:03.055 ], 00:14:03.055 "allow_any_host": true, 00:14:03.055 "hosts": [], 00:14:03.055 "serial_number": "SPDK00000000000002", 00:14:03.055 "model_number": "SPDK bdev Controller", 00:14:03.055 "max_namespaces": 32, 00:14:03.055 "min_cntlid": 1, 00:14:03.055 "max_cntlid": 65519, 00:14:03.055 "namespaces": [ 00:14:03.055 { 00:14:03.055 "nsid": 1, 00:14:03.055 "bdev_name": "Null2", 00:14:03.055 "name": "Null2", 00:14:03.055 "nguid": "C8233054460C4E7DAF6CE140BC8E4911", 00:14:03.055 "uuid": "c8233054-460c-4e7d-af6c-e140bc8e4911" 00:14:03.055 } 00:14:03.055 ] 00:14:03.055 }, 00:14:03.055 { 00:14:03.055 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:03.055 "subtype": "NVMe", 00:14:03.055 "listen_addresses": [ 00:14:03.055 { 00:14:03.055 "trtype": "TCP", 00:14:03.055 "adrfam": "IPv4", 00:14:03.055 "traddr": "10.0.0.2", 
00:14:03.055 "trsvcid": "4420" 00:14:03.055 } 00:14:03.055 ], 00:14:03.055 "allow_any_host": true, 00:14:03.055 "hosts": [], 00:14:03.055 "serial_number": "SPDK00000000000003", 00:14:03.055 "model_number": "SPDK bdev Controller", 00:14:03.055 "max_namespaces": 32, 00:14:03.055 "min_cntlid": 1, 00:14:03.055 "max_cntlid": 65519, 00:14:03.055 "namespaces": [ 00:14:03.055 { 00:14:03.055 "nsid": 1, 00:14:03.055 "bdev_name": "Null3", 00:14:03.055 "name": "Null3", 00:14:03.055 "nguid": "E5AD3BB5EBB64718B0E71D4C68745078", 00:14:03.055 "uuid": "e5ad3bb5-ebb6-4718-b0e7-1d4c68745078" 00:14:03.055 } 00:14:03.055 ] 00:14:03.055 }, 00:14:03.055 { 00:14:03.055 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:03.055 "subtype": "NVMe", 00:14:03.055 "listen_addresses": [ 00:14:03.055 { 00:14:03.055 "trtype": "TCP", 00:14:03.055 "adrfam": "IPv4", 00:14:03.055 "traddr": "10.0.0.2", 00:14:03.055 "trsvcid": "4420" 00:14:03.055 } 00:14:03.055 ], 00:14:03.055 "allow_any_host": true, 00:14:03.055 "hosts": [], 00:14:03.055 "serial_number": "SPDK00000000000004", 00:14:03.055 "model_number": "SPDK bdev Controller", 00:14:03.055 "max_namespaces": 32, 00:14:03.055 "min_cntlid": 1, 00:14:03.055 "max_cntlid": 65519, 00:14:03.055 "namespaces": [ 00:14:03.055 { 00:14:03.055 "nsid": 1, 00:14:03.055 "bdev_name": "Null4", 00:14:03.055 "name": "Null4", 00:14:03.055 "nguid": "C684A2CE015F4F14B5E54728B42588F2", 00:14:03.055 "uuid": "c684a2ce-015f-4f14-b5e5-4728b42588f2" 00:14:03.055 } 00:14:03.055 ] 00:14:03.055 } 00:14:03.055 ] 00:14:03.055 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.055 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:03.055 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:03.055 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:03.055 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.055 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.316 07:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.316 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:03.317 07:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:03.317 rmmod nvme_tcp 00:14:03.317 rmmod nvme_fabrics 00:14:03.317 rmmod nvme_keyring 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1350172 ']' 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1350172 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1350172 ']' 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1350172 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.317 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1350172 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1350172' 00:14:03.578 killing process with pid 1350172 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1350172 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1350172 00:14:03.578 07:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.578 07:23:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.121 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:06.121 00:14:06.121 real 0m11.680s 00:14:06.121 user 0m8.830s 00:14:06.121 sys 0m6.133s 00:14:06.121 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.121 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.121 ************************************ 00:14:06.121 END TEST nvmf_target_discovery 00:14:06.121 ************************************ 00:14:06.121 07:23:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:06.121 07:23:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:06.121 07:23:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.121 07:23:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.121 ************************************ 00:14:06.121 START TEST nvmf_referrals 00:14:06.121 ************************************ 00:14:06.121 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:06.121 * Looking for test storage... 
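Taken together, the nvmf_target_discovery run that just finished reduces to a short RPC sequence against a live nvmf_tgt. A minimal sketch of the same flow, assuming the stock scripts/rpc.py from the SPDK tree and a target already reachable on 10.0.0.2 (this is one of the four cnodeN loop iterations shown above, not a verbatim extract):
  # create a null bdev (size given in MB, 512-byte block size), one per loop iteration
  scripts/rpc.py bdev_null_create Null1 102400 512
  # wrap it in a subsystem that allows any host (-a) with a fixed serial (-s)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # advertise a referral on a second service port, then read the log back with nvme-cli
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420
With four subsystems plus the referral, that is exactly the six discovery log records dumped earlier, and nvmf_get_subsystems returns the matching JSON before the test deletes everything again.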
00:14:06.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.122 --rc genhtml_branch_coverage=1 00:14:06.122 --rc genhtml_function_coverage=1 00:14:06.122 --rc genhtml_legend=1 00:14:06.122 --rc geninfo_all_blocks=1 00:14:06.122 --rc geninfo_unexecuted_blocks=1 00:14:06.122 00:14:06.122 ' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.122 --rc genhtml_branch_coverage=1 00:14:06.122 --rc genhtml_function_coverage=1 00:14:06.122 --rc genhtml_legend=1 00:14:06.122 --rc geninfo_all_blocks=1 00:14:06.122 --rc geninfo_unexecuted_blocks=1 00:14:06.122 00:14:06.122 ' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:06.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.122 --rc genhtml_branch_coverage=1 00:14:06.122 --rc genhtml_function_coverage=1 00:14:06.122 --rc genhtml_legend=1 00:14:06.122 --rc geninfo_all_blocks=1 00:14:06.122 --rc geninfo_unexecuted_blocks=1 00:14:06.122 00:14:06.122 ' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.122 --rc genhtml_branch_coverage=1 00:14:06.122 --rc genhtml_function_coverage=1 00:14:06.122 --rc genhtml_legend=1 00:14:06.122 --rc geninfo_all_blocks=1 00:14:06.122 --rc geninfo_unexecuted_blocks=1 00:14:06.122 00:14:06.122 ' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:06.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
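The nvmf/common.sh preamble sourced here derives the initiator identity once and reuses it for every discover and connect in the suite; the '[: : integer expression expected' complaint is common.sh line 33 testing an empty variable with -eq and appears to be harmless noise. A rough equivalent of the identity setup, assuming nvme-cli's gen-hostnqn (the hostid is simply the UUID suffix of the generated NQN; the exact expansion common.sh uses may differ):
  # emits nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  # strip the prefix to recover the bare UUID passed as --hostid
  NVME_HOSTID=${NVME_HOSTNQN#nqn.2014-08.org.nvmexpress:uuid:}
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")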
00:14:06.122 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.123 07:23:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.123 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:06.123 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:06.123 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:14:06.123 07:23:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:14.266 07:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.266 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:14.267 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:14.267 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:14.267 
07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:14.267 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:14.267 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:14.267 07:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:14.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:14.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:14:14.267 00:14:14.267 --- 10.0.0.2 ping statistics --- 00:14:14.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.267 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:14:14.267 00:14:14.267 --- 10.0.0.1 ping statistics --- 00:14:14.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.267 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1354839 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1354839 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1354839 ']' 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
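The nvmf_tcp_init block above moves one of the two e810 ports into a private network namespace so that target and initiator traffic cross real NICs instead of loopback. Condensed from the commands in the trace (the cvl_0_* names are whatever this rig enumerated; the last line launches the target inside the namespace, as the nvmfpid/waitforlisten records show):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF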
00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.267 07:23:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.267 [2024-11-26 07:23:41.638839] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:14:14.267 [2024-11-26 07:23:41.638908] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.267 [2024-11-26 07:23:41.741631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.267 [2024-11-26 07:23:41.794764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.267 [2024-11-26 07:23:41.794811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.268 [2024-11-26 07:23:41.794820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.268 [2024-11-26 07:23:41.794827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.268 [2024-11-26 07:23:41.794833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.268 [2024-11-26 07:23:41.796893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.268 [2024-11-26 07:23:41.797059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.268 [2024-11-26 07:23:41.797222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.268 [2024-11-26 07:23:41.797265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.529 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.529 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:14:14.529 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.530 [2024-11-26 07:23:42.513824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
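From here referrals.sh builds its fixture purely through discovery RPCs: a TCP transport, a listener on the well-known discovery port, and three referrals it can assert against. A minimal sketch with rpc.py, using the loopback referral addresses the script defined above (127.0.0.2 through 127.0.0.4, service port 4430):
  # -u 8192 caps in-capsule data at 8 KiB, matching the trace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # listen for discovery on the standard NVMe-oF discovery port
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # the (( 3 == 3 )) check below expects exactly three entries back
  scripts/rpc.py nvmf_discovery_get_referrals | jq length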
00:14:14.530 [2024-11-26 07:23:42.530128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:14.530 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.792 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:15.055 07:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:15.055 07:23:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:15.316 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:15.576 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.836 07:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:15.836 07:23:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:16.098 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:16.098 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:16.098 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:16.098 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:16.098 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:16.098 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:16.098 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:16.359 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:16.359 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:16.359 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:16.359 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:14:16.359 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:16.360 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
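Each referral assertion above is checked from both sides: the control plane, via the nvmf_discovery_get_referrals RPC, and the wire, via a live discovery connection, with jq reducing both to a sorted address list. A condensed sketch of the comparison (rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py; the discover flags and jq filter are exactly the ones logged above):

# Control-plane view: referral traddrs the target believes it advertises
rpc_ips=$(rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
# Wire view: referral records actually returned on a discovery connection
wire_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
[[ "$rpc_ips" == "$wire_ips" ]]   # the test fails if the two views diverge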
00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:16.621 rmmod nvme_tcp 00:14:16.621 rmmod nvme_fabrics 00:14:16.621 rmmod nvme_keyring 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1354839 ']' 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1354839 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1354839 ']' 00:14:16.621 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1354839 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1354839 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1354839' 00:14:16.881 killing process with pid 1354839 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1354839 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1354839 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.881 07:23:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.881 07:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.429 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:19.429 00:14:19.429 real 0m13.230s 00:14:19.429 user 0m15.546s 00:14:19.429 sys 0m6.619s 00:14:19.429 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.429 07:23:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:19.429 ************************************ 00:14:19.429 END TEST nvmf_referrals 00:14:19.429 ************************************ 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:19.429 ************************************ 00:14:19.429 START TEST nvmf_connect_disconnect 00:14:19.429 ************************************ 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:19.429 * Looking for test storage... 00:14:19.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:19.429 07:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:19.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.429 --rc genhtml_branch_coverage=1 00:14:19.429 --rc genhtml_function_coverage=1 00:14:19.429 --rc genhtml_legend=1 00:14:19.429 --rc geninfo_all_blocks=1 00:14:19.429 --rc geninfo_unexecuted_blocks=1 00:14:19.429 00:14:19.429 ' 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:19.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.429 --rc genhtml_branch_coverage=1 00:14:19.429 --rc genhtml_function_coverage=1 00:14:19.429 --rc genhtml_legend=1 00:14:19.429 --rc geninfo_all_blocks=1 00:14:19.429 --rc geninfo_unexecuted_blocks=1 00:14:19.429 00:14:19.429 ' 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:19.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.429 --rc genhtml_branch_coverage=1 00:14:19.429 --rc genhtml_function_coverage=1 00:14:19.429 --rc genhtml_legend=1 00:14:19.429 --rc geninfo_all_blocks=1 00:14:19.429 --rc geninfo_unexecuted_blocks=1 00:14:19.429 00:14:19.429 ' 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:19.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.429 --rc genhtml_branch_coverage=1 00:14:19.429 --rc genhtml_function_coverage=1 00:14:19.429 --rc genhtml_legend=1 00:14:19.429 --rc geninfo_all_blocks=1 00:14:19.429 --rc geninfo_unexecuted_blocks=1 00:14:19.429 00:14:19.429 ' 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.429 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.430 07:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:19.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:19.430 07:23:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:27.572 
07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:27.572 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.572 
07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:27.572 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:27.572 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:27.572 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:14:27.572 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:27.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:14:27.573 00:14:27.573 --- 10.0.0.2 ping statistics --- 00:14:27.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.573 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:14:27.573 00:14:27.573 --- 10.0.0.1 ping statistics --- 00:14:27.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.573 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1359695 00:14:27.573 07:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1359695 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1359695 ']' 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.573 07:23:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 [2024-11-26 07:23:54.951068] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:14:27.573 [2024-11-26 07:23:54.951139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.573 [2024-11-26 07:23:55.052212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.573 [2024-11-26 07:23:55.105516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.573 [2024-11-26 07:23:55.105569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.573 [2024-11-26 07:23:55.105578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.573 [2024-11-26 07:23:55.105586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.573 [2024-11-26 07:23:55.105592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
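The connect_disconnect run re-enters nvmf_tcp_init, and the entries above show the namespace plumbing it performs before starting nvmf_tgt: one E810 port is moved into a fresh namespace as the target, its peer stays in the root namespace as the initiator, and an iptables rule opens the NVMe/TCP data port (the ipts wrapper also tags the rule with an SPDK_NVMF comment for later cleanup). Condensed from the ip/iptables calls logged above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT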
00:14:27.573 [2024-11-26 07:23:55.107622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.573 [2024-11-26 07:23:55.107782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.573 [2024-11-26 07:23:55.107944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.573 [2024-11-26 07:23:55.107945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.835 [2024-11-26 07:23:55.816770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.835 07:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.835 [2024-11-26 07:23:55.894735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:27.835 07:23:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:32.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:46.242 rmmod nvme_tcp 00:14:46.242 rmmod nvme_fabrics 00:14:46.242 rmmod nvme_keyring 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1359695 ']' 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1359695 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1359695 ']' 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1359695 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
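Condensed from the xtrace above, the target bring-up for this test is five RPCs followed by a five-iteration connect/disconnect loop (num_iterations=5, one "disconnected 1 controller(s)" line per pass). A sketch, assuming rpc_cmd resolves, as usual in this harness, to scripts/rpc.py against /var/tmp/spdk.sock:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport, flags exactly as recorded in the trace
    rpc.py bdev_malloc_create 64 512                       # 64 MiB ramdisk, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420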
00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1359695 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1359695' 00:14:46.242 killing process with pid 1359695 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1359695 00:14:46.242 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1359695 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.503 07:24:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:49.048 00:14:49.048 real 0m29.464s 00:14:49.048 user 1m19.222s 00:14:49.048 sys 0m7.203s 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:49.048 ************************************ 00:14:49.048 END TEST nvmf_connect_disconnect 00:14:49.048 ************************************ 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.048 07:24:16 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:49.048 ************************************ 00:14:49.048 START TEST nvmf_multitarget 00:14:49.048 ************************************ 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:49.048 * Looking for test storage... 00:14:49.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:49.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.048 --rc genhtml_branch_coverage=1 00:14:49.048 --rc genhtml_function_coverage=1 00:14:49.048 --rc genhtml_legend=1 00:14:49.048 --rc geninfo_all_blocks=1 00:14:49.048 --rc geninfo_unexecuted_blocks=1 00:14:49.048 00:14:49.048 ' 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:49.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.048 --rc genhtml_branch_coverage=1 00:14:49.048 --rc genhtml_function_coverage=1 00:14:49.048 --rc genhtml_legend=1 00:14:49.048 --rc geninfo_all_blocks=1 00:14:49.048 --rc geninfo_unexecuted_blocks=1 00:14:49.048 00:14:49.048 ' 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:49.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.048 --rc genhtml_branch_coverage=1 00:14:49.048 --rc genhtml_function_coverage=1 00:14:49.048 --rc genhtml_legend=1 00:14:49.048 --rc geninfo_all_blocks=1 00:14:49.048 --rc geninfo_unexecuted_blocks=1 00:14:49.048 00:14:49.048 ' 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:49.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:49.048 --rc genhtml_branch_coverage=1 00:14:49.048 --rc genhtml_function_coverage=1 00:14:49.048 --rc genhtml_legend=1 00:14:49.048 --rc geninfo_all_blocks=1 00:14:49.048 --rc geninfo_unexecuted_blocks=1 00:14:49.048 00:14:49.048 ' 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.048 07:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.048 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:49.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:49.049 07:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:49.049 07:24:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:57.194 07:24:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:57.194 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:57.194 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:57.194 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:57.194 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:57.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:14:57.194 00:14:57.194 --- 10.0.0.2 ping statistics --- 00:14:57.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.194 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:57.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:14:57.194 00:14:57.194 --- 10.0.0.1 ping statistics --- 00:14:57.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.194 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1368311 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1368311 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1368311 ']' 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.194 07:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:57.194 [2024-11-26 07:24:24.422683] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
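As in the previous test, the harness launches the target inside the cvl_0_0_ns_spdk namespace and blocks until the RPC socket answers; a hedged reconstruction of that launch pattern (the backgrounding and pid capture paraphrase what nvmfappstart/waitforlisten do in this trace, with the workspace prefix shortened):

    # -i 0: shm instance id, -e 0xFFFF: all tracepoint groups, -m 0xF: cores 0-3
    ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # polls until /var/tmp/spdk.sock accepts connections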
00:14:57.194 [2024-11-26 07:24:24.422761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.194 [2024-11-26 07:24:24.523055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.194 [2024-11-26 07:24:24.577389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.194 [2024-11-26 07:24:24.577436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.194 [2024-11-26 07:24:24.577444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.194 [2024-11-26 07:24:24.577452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.194 [2024-11-26 07:24:24.577458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.194 [2024-11-26 07:24:24.579429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.194 [2024-11-26 07:24:24.579598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.194 [2024-11-26 07:24:24.579761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.194 [2024-11-26 07:24:24.579761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.194 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.194 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:57.194 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.195 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.195 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:57.455 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.455 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:57.455 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:57.455 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:57.455 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:57.455 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:57.455 "nvmf_tgt_1" 00:14:57.455 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:57.716 "nvmf_tgt_2" 00:14:57.716 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
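For reference, the whole multitarget check that the surrounding xtrace performs, condensed; a sketch assuming multitarget_rpc.py is invoked exactly as in the trace, with the jq length assertions inlined as comments:

    multitarget_rpc.py nvmf_get_targets | jq length           # 1: only the default target
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    multitarget_rpc.py nvmf_get_targets | jq length           # 3: default plus the two new targets
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    multitarget_rpc.py nvmf_get_targets | jq length           # back to 1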
00:14:57.716 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:57.716 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:57.716 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:57.976 true 00:14:57.977 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:57.977 true 00:14:57.977 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:57.977 07:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:58.237 rmmod nvme_tcp 00:14:58.237 rmmod nvme_fabrics 00:14:58.237 rmmod nvme_keyring 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1368311 ']' 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1368311 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1368311 ']' 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1368311 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1368311 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:58.237 07:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1368311' 00:14:58.237 killing process with pid 1368311 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1368311 00:14:58.237 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1368311 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.498 07:24:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.411 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:00.411 00:15:00.411 real 0m11.882s 00:15:00.411 user 0m10.354s 00:15:00.411 sys 0m6.172s 00:15:00.411 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.411 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:00.411 ************************************ 00:15:00.411 END TEST nvmf_multitarget 00:15:00.411 ************************************ 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:00.672 ************************************ 00:15:00.672 START TEST nvmf_rpc 00:15:00.672 ************************************ 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:00.672 * Looking for test storage... 
00:15:00.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.672 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.933 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.934 --rc genhtml_branch_coverage=1 00:15:00.934 --rc genhtml_function_coverage=1 00:15:00.934 --rc genhtml_legend=1 00:15:00.934 --rc geninfo_all_blocks=1 00:15:00.934 --rc geninfo_unexecuted_blocks=1 00:15:00.934 00:15:00.934 ' 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.934 --rc genhtml_branch_coverage=1 00:15:00.934 --rc genhtml_function_coverage=1 00:15:00.934 --rc genhtml_legend=1 00:15:00.934 --rc geninfo_all_blocks=1 00:15:00.934 --rc geninfo_unexecuted_blocks=1 00:15:00.934 00:15:00.934 ' 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.934 --rc genhtml_branch_coverage=1 00:15:00.934 --rc genhtml_function_coverage=1 00:15:00.934 --rc genhtml_legend=1 00:15:00.934 --rc geninfo_all_blocks=1 00:15:00.934 --rc geninfo_unexecuted_blocks=1 00:15:00.934 00:15:00.934 ' 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:00.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.934 --rc genhtml_branch_coverage=1 00:15:00.934 --rc genhtml_function_coverage=1 00:15:00.934 --rc genhtml_legend=1 00:15:00.934 --rc geninfo_all_blocks=1 00:15:00.934 --rc geninfo_unexecuted_blocks=1 00:15:00.934 00:15:00.934 ' 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:00.934 07:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:15:00.934 07:24:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:09.083 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:09.083 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:09.083 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.083 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:09.084 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:09.084 07:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:09.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:15:09.084 00:15:09.084 --- 10.0.0.2 ping statistics --- 00:15:09.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.084 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:15:09.084 00:15:09.084 --- 10.0.0.1 ping statistics --- 00:15:09.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.084 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1373019 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1373019 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1373019 ']' 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.084 07:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.084 [2024-11-26 07:24:36.468749] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
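Everything from prepare_net_devs down to the two pings builds the physical-NIC ("phy") topology: one port of the detected e810 pair (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic crosses a real link even though both ends live on one machine; the target application is then launched inside that namespace, which is why its command line is prefixed with ip netns exec. Condensed into a standalone sketch (root required; interface names, addresses, and the -i 0 -e 0xFFFF -m 0xF flags are taken from this log, the relative nvmf_tgt path is an assumption):

# Isolate the target-side port in its own namespace.
TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"          # target port
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator stays in the root ns
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
# Let NVMe/TCP (port 4420) through any host firewall on the initiator side.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
# Run the target inside the namespace, as NVMF_TARGET_NS_CMD does above.
ip netns exec "$TARGET_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &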
00:15:09.084 [2024-11-26 07:24:36.468827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.084 [2024-11-26 07:24:36.567683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.084 [2024-11-26 07:24:36.622721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.084 [2024-11-26 07:24:36.622771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.084 [2024-11-26 07:24:36.622780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.084 [2024-11-26 07:24:36.622788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.084 [2024-11-26 07:24:36.622794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.084 [2024-11-26 07:24:36.624911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.084 [2024-11-26 07:24:36.625072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.084 [2024-11-26 07:24:36.625230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.084 [2024-11-26 07:24:36.625256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:09.346 "tick_rate": 2400000000, 00:15:09.346 "poll_groups": [ 00:15:09.346 { 00:15:09.346 "name": "nvmf_tgt_poll_group_000", 00:15:09.346 "admin_qpairs": 0, 00:15:09.346 "io_qpairs": 0, 00:15:09.346 "current_admin_qpairs": 0, 00:15:09.346 "current_io_qpairs": 0, 00:15:09.346 "pending_bdev_io": 0, 00:15:09.346 "completed_nvme_io": 0, 00:15:09.346 "transports": [] 00:15:09.346 }, 00:15:09.346 { 00:15:09.346 "name": "nvmf_tgt_poll_group_001", 00:15:09.346 "admin_qpairs": 0, 00:15:09.346 "io_qpairs": 0, 00:15:09.346 "current_admin_qpairs": 0, 00:15:09.346 "current_io_qpairs": 0, 00:15:09.346 "pending_bdev_io": 0, 00:15:09.346 "completed_nvme_io": 0, 00:15:09.346 "transports": [] 00:15:09.346 }, 00:15:09.346 { 00:15:09.346 "name": "nvmf_tgt_poll_group_002", 00:15:09.346 "admin_qpairs": 0, 00:15:09.346 "io_qpairs": 0, 00:15:09.346 
"current_admin_qpairs": 0, 00:15:09.346 "current_io_qpairs": 0, 00:15:09.346 "pending_bdev_io": 0, 00:15:09.346 "completed_nvme_io": 0, 00:15:09.346 "transports": [] 00:15:09.346 }, 00:15:09.346 { 00:15:09.346 "name": "nvmf_tgt_poll_group_003", 00:15:09.346 "admin_qpairs": 0, 00:15:09.346 "io_qpairs": 0, 00:15:09.346 "current_admin_qpairs": 0, 00:15:09.346 "current_io_qpairs": 0, 00:15:09.346 "pending_bdev_io": 0, 00:15:09.346 "completed_nvme_io": 0, 00:15:09.346 "transports": [] 00:15:09.346 } 00:15:09.346 ] 00:15:09.346 }' 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:09.346 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:09.606 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:09.606 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.607 [2024-11-26 07:24:37.450851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:09.607 "tick_rate": 2400000000, 00:15:09.607 "poll_groups": [ 00:15:09.607 { 00:15:09.607 "name": "nvmf_tgt_poll_group_000", 00:15:09.607 "admin_qpairs": 0, 00:15:09.607 "io_qpairs": 0, 00:15:09.607 "current_admin_qpairs": 0, 00:15:09.607 "current_io_qpairs": 0, 00:15:09.607 "pending_bdev_io": 0, 00:15:09.607 "completed_nvme_io": 0, 00:15:09.607 "transports": [ 00:15:09.607 { 00:15:09.607 "trtype": "TCP" 00:15:09.607 } 00:15:09.607 ] 00:15:09.607 }, 00:15:09.607 { 00:15:09.607 "name": "nvmf_tgt_poll_group_001", 00:15:09.607 "admin_qpairs": 0, 00:15:09.607 "io_qpairs": 0, 00:15:09.607 "current_admin_qpairs": 0, 00:15:09.607 "current_io_qpairs": 0, 00:15:09.607 "pending_bdev_io": 0, 00:15:09.607 "completed_nvme_io": 0, 00:15:09.607 "transports": [ 00:15:09.607 { 00:15:09.607 "trtype": "TCP" 00:15:09.607 } 00:15:09.607 ] 00:15:09.607 }, 00:15:09.607 { 00:15:09.607 "name": "nvmf_tgt_poll_group_002", 00:15:09.607 "admin_qpairs": 0, 00:15:09.607 "io_qpairs": 0, 00:15:09.607 "current_admin_qpairs": 0, 00:15:09.607 "current_io_qpairs": 0, 00:15:09.607 "pending_bdev_io": 0, 00:15:09.607 "completed_nvme_io": 0, 00:15:09.607 "transports": [ 00:15:09.607 { 00:15:09.607 "trtype": "TCP" 
00:15:09.607 } 00:15:09.607 ] 00:15:09.607 }, 00:15:09.607 { 00:15:09.607 "name": "nvmf_tgt_poll_group_003", 00:15:09.607 "admin_qpairs": 0, 00:15:09.607 "io_qpairs": 0, 00:15:09.607 "current_admin_qpairs": 0, 00:15:09.607 "current_io_qpairs": 0, 00:15:09.607 "pending_bdev_io": 0, 00:15:09.607 "completed_nvme_io": 0, 00:15:09.607 "transports": [ 00:15:09.607 { 00:15:09.607 "trtype": "TCP" 00:15:09.607 } 00:15:09.607 ] 00:15:09.607 } 00:15:09.607 ] 00:15:09.607 }' 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.607 Malloc1 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.607 [2024-11-26 07:24:37.659879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:09.607 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:15:09.607 [2024-11-26 07:24:37.696939] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:09.868 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:09.868 could not add new controller: failed to write to nvme-fabrics device 00:15:09.868 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:09.868 07:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:09.868 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:09.868 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:09.868 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:09.868 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.868 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.868 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.868 07:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:11.256 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:11.256 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:11.256 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.256 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:11.256 07:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:13.170 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:13.170 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:13.170 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.170 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:13.170 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.170 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:13.170 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:13.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.432 [2024-11-26 07:24:41.434220] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:15:13.432 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:13.432 could not add new controller: failed to write to nvme-fabrics device 00:15:13.432 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:13.433 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:13.433 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:13.433 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:13.433 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:13.433 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.433 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.433 
07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.433 07:24:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:15.349 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.349 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:15.349 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.349 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:15.349 07:24:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:17.264 
07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 [2024-11-26 07:24:45.200195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.264 07:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:19.180 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:19.180 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:19.180 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.180 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:19.180 07:24:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:21.092 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:21.092 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:21.092 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:21.092 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:21.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.093 [2024-11-26 07:24:48.967430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.093 07:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:22.475 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:22.475 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:22.475 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.475 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:22.475 07:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:24.389 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 [2024-11-26 07:24:52.685771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.650 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:24.651 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.651 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:24.651 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.651 07:24:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.562 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:26.562 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:26.562 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.562 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:26.562 07:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:28.474 
07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:28.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
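Each iteration traced above drives the same subsystem lifecycle through SPDK's RPC interface: create the subsystem, add a TCP listener and a namespace, open it to any host, connect from the initiator side, verify the device, then tear everything down. Below is a minimal standalone sketch of that sequence — it assumes a running SPDK nvmf target with a bdev named Malloc1, that scripts/rpc.py is on PATH as rpc.py, and it substitutes nvme gen-hostnqn for the fixed host identity used by this run; it illustrates the pattern in the trace, not the test's own rpc_cmd helper verbatim.

  # One iteration of the create/connect/verify/teardown cycle traced above.
  NQN=nqn.2016-06.io.spdk:cnode1
  SERIAL=SPDKISFASTANDAWESOME
  HOSTNQN=$(nvme gen-hostnqn)                                           # stand-in for the run's fixed host NQN

  rpc.py nvmf_create_subsystem "$NQN" -s "$SERIAL"                      # rpc.sh@82
  rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420  # rpc.sh@83
  rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5                      # rpc.sh@84
  rpc.py nvmf_subsystem_allow_any_host "$NQN"                           # rpc.sh@85

  nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420  # rpc.sh@86, host side
  lsblk -l -o NAME,SERIAL | grep -w "$SERIAL"                           # confirm the namespace surfaced
  nvme disconnect -n "$NQN"                                             # rpc.sh@90

  rpc.py nvmf_subsystem_remove_ns "$NQN" 5                              # rpc.sh@93
  rpc.py nvmf_delete_subsystem "$NQN"                                   # rpc.sh@94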
00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.474 [2024-11-26 07:24:56.551818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.474 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.735 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.735 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:28.735 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.735 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:28.735 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.735 07:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:30.117 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:30.117 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:30.117 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:30.117 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:30.117 07:24:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:32.033 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:32.033 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:32.033 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.033 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:32.033 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.033 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:32.033 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
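The waitforserial and waitforserial_disconnect helpers seen in the trace poll lsblk until the count of block devices carrying the expected serial matches the expected device count, retrying on a fixed budget. A rough, self-contained reimplementation of that polling logic, assuming the same two-second interval and 15-try budget visible at common/autotest_common.sh@1209-1212 (this is a sketch, not the harness's exact helper):

  # Poll until $expected devices with the given serial appear, or give up.
  waitforserial_sketch() {
      local serial=$1 expected=${2:-1} i=0 found=0
      while (( i++ <= 15 )); do
          found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")  # count matching devices
          (( found == expected )) && return 0
          sleep 2                                               # same interval as the trace
      done
      echo "device with serial $serial never appeared" >&2
      return 1
  }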
00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.294 [2024-11-26 07:25:00.285699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:32.294 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.295 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:32.295 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.295 07:25:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:34.208 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:34.208 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:34.208 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:34.208 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:34.208 07:25:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:36.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:36.121 07:25:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:36.121 
07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 [2024-11-26 07:25:04.055327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 [2024-11-26 07:25:04.119476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.121 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.122 
07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.122 [2024-11-26 07:25:04.187675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.122 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 [2024-11-26 07:25:04.259897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 [2024-11-26 07:25:04.328113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:15:36.385 "tick_rate": 2400000000,
00:15:36.385 "poll_groups": [
00:15:36.385 {
00:15:36.385 "name": "nvmf_tgt_poll_group_000",
00:15:36.385 "admin_qpairs": 0,
00:15:36.385 "io_qpairs": 224,
00:15:36.385 "current_admin_qpairs": 0,
00:15:36.385 "current_io_qpairs": 0,
00:15:36.385 "pending_bdev_io": 0,
00:15:36.385 "completed_nvme_io": 463,
00:15:36.385 "transports": [
00:15:36.385 {
00:15:36.385 "trtype": "TCP"
00:15:36.385 }
00:15:36.385 ]
00:15:36.385 },
00:15:36.385 {
00:15:36.385 "name": "nvmf_tgt_poll_group_001",
00:15:36.385 "admin_qpairs": 1,
00:15:36.385 "io_qpairs": 223,
00:15:36.385 "current_admin_qpairs": 0,
00:15:36.385 "current_io_qpairs": 0,
00:15:36.385 "pending_bdev_io": 0,
00:15:36.385 "completed_nvme_io": 322,
00:15:36.385 "transports": [
00:15:36.385 {
00:15:36.385 "trtype": "TCP"
00:15:36.385 }
00:15:36.385 ]
00:15:36.385 },
00:15:36.385 {
00:15:36.385 "name": "nvmf_tgt_poll_group_002",
00:15:36.385 "admin_qpairs": 6,
00:15:36.385 "io_qpairs": 218,
00:15:36.385 "current_admin_qpairs": 0,
00:15:36.385 "current_io_qpairs": 0,
00:15:36.385 "pending_bdev_io": 0,
00:15:36.385 "completed_nvme_io": 223,
00:15:36.385 "transports": [
00:15:36.385 {
00:15:36.385 "trtype": "TCP"
00:15:36.385 }
00:15:36.385 ]
00:15:36.385 },
00:15:36.385 {
00:15:36.385 "name": "nvmf_tgt_poll_group_003",
00:15:36.385 "admin_qpairs": 0,
00:15:36.385 "io_qpairs": 224,
00:15:36.385 "current_admin_qpairs": 0,
00:15:36.385 "current_io_qpairs": 0,
00:15:36.385 "pending_bdev_io": 0,
00:15:36.385 "completed_nvme_io": 231,
00:15:36.385 "transports": [
00:15:36.385 {
00:15:36.385 "trtype": "TCP"
00:15:36.385 }
00:15:36.385 ]
00:15:36.385 }
00:15:36.385 ]
00:15:36.385 }'
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:15:36.385 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:15:36.386 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:15:36.386 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 ))
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1373019 ']'
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1373019
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1373019 ']'
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1373019
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1373019
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1373019'
00:15:36.646 killing process with pid 1373019
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1373019
00:15:36.646 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1373019
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:36.907 07:25:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:38.822 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:15:38.822
00:15:38.822 real 0m38.249s
00:15:38.822 user 1m54.406s
00:15:38.822 sys 0m8.041s
00:15:38.822 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:38.822 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:38.822 ************************************
00:15:38.822 END TEST nvmf_rpc
00:15:38.822 ************************************
00:15:38.822 07:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:15:38.822 07:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:38.822 07:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:38.822 07:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:38.822 ************************************
00:15:38.822 START TEST nvmf_invalid
00:15:38.822 ************************************
00:15:38.822 07:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:15:39.084 * Looking for test storage...
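For readers reconstructing the check above: the jsum helper at target/rpc.sh@112-113 applies a jq filter to the captured nvmf_get_stats JSON and sums the resulting numbers with awk. A self-contained approximation, assuming jq and awk as in the trace and rpc.py on PATH (jsum_sketch is an illustrative name, not the harness function itself):

  # Sum one numeric field across all poll groups in the stats JSON (rpc.sh@19-20).
  jsum_sketch() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  stats=$(rpc.py nvmf_get_stats)               # rpc.sh@110
  jsum_sketch '.poll_groups[].admin_qpairs'    # 0+1+6+0 = 7 in the run above
  jsum_sketch '.poll_groups[].io_qpairs'       # 224+223+218+224 = 889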
00:15:39.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.084 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:39.084 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:15:39.084 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:39.084 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:39.084 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.084 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.084 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.084 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:39.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.085 --rc genhtml_branch_coverage=1 00:15:39.085 --rc genhtml_function_coverage=1 00:15:39.085 --rc genhtml_legend=1 00:15:39.085 --rc geninfo_all_blocks=1 00:15:39.085 --rc geninfo_unexecuted_blocks=1 00:15:39.085 00:15:39.085 ' 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:39.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.085 --rc genhtml_branch_coverage=1 00:15:39.085 --rc genhtml_function_coverage=1 00:15:39.085 --rc genhtml_legend=1 00:15:39.085 --rc geninfo_all_blocks=1 00:15:39.085 --rc geninfo_unexecuted_blocks=1 00:15:39.085 00:15:39.085 ' 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:39.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.085 --rc genhtml_branch_coverage=1 00:15:39.085 --rc genhtml_function_coverage=1 00:15:39.085 --rc genhtml_legend=1 00:15:39.085 --rc geninfo_all_blocks=1 00:15:39.085 --rc geninfo_unexecuted_blocks=1 00:15:39.085 00:15:39.085 ' 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:39.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.085 --rc genhtml_branch_coverage=1 00:15:39.085 --rc genhtml_function_coverage=1 00:15:39.085 --rc genhtml_legend=1 00:15:39.085 --rc geninfo_all_blocks=1 00:15:39.085 --rc geninfo_unexecuted_blocks=1 00:15:39.085 00:15:39.085 ' 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:39.085 07:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:39.085 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:39.086 07:25:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:15:47.233 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
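The e810/x722/mlx appends above all read from an associative array keyed by "vendor:device" IDs; the unquoted expansion word-splits into zero or more PCI addresses per supported ID. A sketch of the mechanism, with the map contents assumed from the "Found" lines (the real cache is populated elsewhere in nvmf/common.sh):

    declare -A pci_bus_cache=(
        ["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1"   # the two E810 ports found above
    )
    intel=0x8086
    e810=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})   # key absent: appends nothing
    e810+=(${pci_bus_cache["$intel:0x159b"]})   # appends both functions
    echo "${#e810[@]} E810 functions: ${e810[*]}"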
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:15:47.233 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:15:47.233 Found net devices under 0000:4b:00.0: cvl_0_0
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:15:47.233 Found net devices under 0000:4b:00.1: cvl_0_1
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init
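Interface discovery above is a plain sysfs walk: every netdev bound to a PCI function shows up as a directory under that function's net/ subtree. Condensed from the trace (@411 through @428), runnable on any Linux box with the BDFs swapped in:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done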
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:15:47.233 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:47.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:47.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms
00:15:47.234
00:15:47.234 --- 10.0.0.2 ping statistics ---
00:15:47.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:47.234 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms
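The ipts call at nvmf/common.sh@287 expands (at @790) into plain iptables plus an -m comment tag that embeds the rule's own arguments, so teardown can later find exactly what the test inserted. The wrapper is small enough to restate from the trace; the inspection line is a hedged sketch, not the library's cleanup code:

    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sketch: count the rules this test family has added so far
    iptables-save | grep -c 'SPDK_NVMF:'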
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:47.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:47.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms
00:15:47.234
00:15:47.234 --- 10.0.0.1 ping statistics ---
00:15:47.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:47.234 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1382884
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1382884
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1382884 ']'
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:47.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:47.234 07:25:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
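waitforlisten above blocks until the freshly started nvmf_tgt (pid 1382884) answers on /var/tmp/spdk.sock. A minimal sketch of that polling loop, assuming rpc.py and its rpc_get_methods call; the real helper in autotest_common.sh handles more edge cases:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died early
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # never started listening
    }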
00:15:47.234 [2024-11-26 07:25:14.679988] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:15:47.234 [2024-11-26 07:25:14.680054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:47.234 [2024-11-26 07:25:14.779711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:47.234 [2024-11-26 07:25:14.833470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:47.234 [2024-11-26 07:25:14.833524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:47.234 [2024-11-26 07:25:14.833533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:47.234 [2024-11-26 07:25:14.833541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:47.234 [2024-11-26 07:25:14.833548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:47.234 [2024-11-26 07:25:14.835603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:47.234 [2024-11-26 07:25:14.835767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:47.234 [2024-11-26 07:25:14.835928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:15:47.234 [2024-11-26 07:25:14.835929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:47.495 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:47.495 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:15:47.495 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:47.495 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:47.495 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:15:47.495 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:47.495 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
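Every negative test from here on follows one shape: issue an RPC that must fail, capture the JSON-RPC error, and glob-match the message. A condensed sketch of the first probe below, with the rpc.py path shortened:

    out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19993 2>&1) || true
    [[ $out == *"Unable to find target"* ]] || { echo "unexpected: $out"; exit 1; }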
00:15:47.495 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19993
00:15:47.755 [2024-11-26 07:25:15.720693] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:15:47.755 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:15:47.755 {
00:15:47.755 "nqn": "nqn.2016-06.io.spdk:cnode19993",
00:15:47.755 "tgt_name": "foobar",
00:15:47.755 "method": "nvmf_create_subsystem",
00:15:47.755 "req_id": 1
00:15:47.755 }
00:15:47.755 Got JSON-RPC error response
00:15:47.755 response:
00:15:47.755 {
00:15:47.755 "code": -32603,
00:15:47.755 "message": "Unable to find target foobar"
00:15:47.755 }'
00:15:47.755 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:15:47.755 {
00:15:47.755 "nqn": "nqn.2016-06.io.spdk:cnode19993",
00:15:47.755 "tgt_name": "foobar",
00:15:47.755 "method": "nvmf_create_subsystem",
00:15:47.755 "req_id": 1
00:15:47.755 }
00:15:47.755 Got JSON-RPC error response
00:15:47.755 response:
00:15:47.755 {
00:15:47.755 "code": -32603,
00:15:47.755 "message": "Unable to find target foobar"
00:15:47.756 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:15:47.756 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:15:47.756 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6043
00:15:48.016 [2024-11-26 07:25:15.929571] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6043: invalid serial number 'SPDKISFASTANDAWESOME'
00:15:48.016 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:15:48.016 {
00:15:48.016 "nqn": "nqn.2016-06.io.spdk:cnode6043",
00:15:48.016 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:15:48.016 "method": "nvmf_create_subsystem",
00:15:48.016 "req_id": 1
00:15:48.016 }
00:15:48.016 Got JSON-RPC error response
00:15:48.016 response:
00:15:48.016 {
00:15:48.016 "code": -32602,
00:15:48.016 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:15:48.016 }'
00:15:48.016 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:15:48.016 {
00:15:48.016 "nqn": "nqn.2016-06.io.spdk:cnode6043",
00:15:48.016 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:15:48.016 "method": "nvmf_create_subsystem",
00:15:48.016 "req_id": 1
00:15:48.016 }
00:15:48.016 Got JSON-RPC error response
00:15:48.016 response:
00:15:48.016 {
00:15:48.016 "code": -32602,
00:15:48.016 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:15:48.016 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:15:48.016 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:15:48.016 07:25:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32416
00:15:48.278 [2024-11-26 07:25:16.138308] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32416: invalid model number 'SPDK_Controller'
00:15:48.278 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:15:48.278 {
00:15:48.278 "nqn": "nqn.2016-06.io.spdk:cnode32416",
00:15:48.278 "model_number": "SPDK_Controller\u001f",
00:15:48.278 "method": "nvmf_create_subsystem",
00:15:48.278 "req_id": 1
00:15:48.278 }
00:15:48.278 Got JSON-RPC error response
00:15:48.278 response:
00:15:48.278 {
00:15:48.278 "code": -32602,
00:15:48.278 "message": "Invalid MN SPDK_Controller\u001f"
00:15:48.278 }'
00:15:48.278 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:15:48.278 {
00:15:48.278 "nqn": "nqn.2016-06.io.spdk:cnode32416",
00:15:48.278 "model_number": "SPDK_Controller\u001f",
00:15:48.278 "method": "nvmf_create_subsystem",
00:15:48.278 "req_id": 1
00:15:48.278 }
00:15:48.278 Got JSON-RPC error response
00:15:48.278 response:
00:15:48.278 {
00:15:48.278 "code": -32602,
00:15:48.278 "message": "Invalid MN SPDK_Controller\u001f"
00:15:48.278 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:15:48.278 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:15:48.278 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:15:48.278 07:25:16
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:48.278 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:48.278 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 
00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 
00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ & == \- ]] 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '&.!5@4uWP 6yKm18UGpkI' 00:15:48.279 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '&.!5@4uWP 6yKm18UGpkI' nqn.2016-06.io.spdk:cnode27442 00:15:48.541 [2024-11-26 07:25:16.523862] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27442: invalid serial number '&.!5@4uWP 6yKm18UGpkI' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:48.541 { 00:15:48.541 "nqn": "nqn.2016-06.io.spdk:cnode27442", 00:15:48.541 "serial_number": "&.!5@4uWP 6yKm18UGpkI", 00:15:48.541 "method": "nvmf_create_subsystem", 00:15:48.541 "req_id": 1 00:15:48.541 } 00:15:48.541 Got JSON-RPC error response 00:15:48.541 response: 00:15:48.541 { 
00:15:48.541 "code": -32602, 00:15:48.541 "message": "Invalid SN &.!5@4uWP 6yKm18UGpkI" 00:15:48.541 }' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:48.541 { 00:15:48.541 "nqn": "nqn.2016-06.io.spdk:cnode27442", 00:15:48.541 "serial_number": "&.!5@4uWP 6yKm18UGpkI", 00:15:48.541 "method": "nvmf_create_subsystem", 00:15:48.541 "req_id": 1 00:15:48.541 } 00:15:48.541 Got JSON-RPC error response 00:15:48.541 response: 00:15:48.541 { 00:15:48.541 "code": -32602, 00:15:48.541 "message": "Invalid SN &.!5@4uWP 6yKm18UGpkI" 00:15:48.541 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 
00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.541 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.804 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=5 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x23' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.805 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:49.066 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:49.066 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:49.066 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:49.066 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:49.066 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ [ == 
\- ]] 00:15:49.066 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '[]-FQlqn'\''@SfMRn4YioA*ux5r$N?`-*MOS#HGC`Pu' 00:15:49.066 07:25:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '[]-FQlqn'\''@SfMRn4YioA*ux5r$N?`-*MOS#HGC`Pu' nqn.2016-06.io.spdk:cnode5146 00:15:49.066 [2024-11-26 07:25:17.065967] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5146: invalid model number '[]-FQlqn'@SfMRn4YioA*ux5r$N?`-*MOS#HGC`Pu' 00:15:49.066 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:49.066 { 00:15:49.066 "nqn": "nqn.2016-06.io.spdk:cnode5146", 00:15:49.066 "model_number": "[]-FQlqn'\''@SfMRn4YioA*ux5r$N?`-*MOS#HGC`Pu", 00:15:49.066 "method": "nvmf_create_subsystem", 00:15:49.066 "req_id": 1 00:15:49.067 } 00:15:49.067 Got JSON-RPC error response 00:15:49.067 response: 00:15:49.067 { 00:15:49.067 "code": -32602, 00:15:49.067 "message": "Invalid MN []-FQlqn'\''@SfMRn4YioA*ux5r$N?`-*MOS#HGC`Pu" 00:15:49.067 }' 00:15:49.067 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:49.067 { 00:15:49.067 "nqn": "nqn.2016-06.io.spdk:cnode5146", 00:15:49.067 "model_number": "[]-FQlqn'@SfMRn4YioA*ux5r$N?`-*MOS#HGC`Pu", 00:15:49.067 "method": "nvmf_create_subsystem", 00:15:49.067 "req_id": 1 00:15:49.067 } 00:15:49.067 Got JSON-RPC error response 00:15:49.067 response: 00:15:49.067 { 00:15:49.067 "code": -32602, 00:15:49.067 "message": "Invalid MN []-FQlqn'@SfMRn4YioA*ux5r$N?`-*MOS#HGC`Pu" 00:15:49.067 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:49.067 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:49.327 [2024-11-26 07:25:17.270920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.327 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:49.588 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:49.588 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:49.588 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:49.588 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:49.588 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:49.588 [2024-11-26 07:25:17.668213] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:49.849 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:49.849 { 00:15:49.849 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:49.849 "listen_address": { 00:15:49.849 "trtype": "tcp", 00:15:49.849 "traddr": "", 00:15:49.849 "trsvcid": "4421" 00:15:49.849 }, 00:15:49.849 "method": "nvmf_subsystem_remove_listener", 00:15:49.849 "req_id": 1 00:15:49.849 } 00:15:49.849 Got JSON-RPC error response 00:15:49.849 response: 00:15:49.849 { 00:15:49.849 "code": -32602, 00:15:49.849 "message": "Invalid parameters" 
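[Editor's note] The long character-by-character trace above is invalid.sh composing a random model number: each iteration picks a code point, renders it with printf %x plus echo -e, and appends it to string. The finished value is 41 bytes, one past the 40-byte NVMe model-number field, so the target must reject it with -32602/"Invalid MN". A condensed sketch of that generator and the probe it feeds (the exact code-point range the script draws from is an assumption; cnode5146 is the subsystem from the trace):

  length=41
  string=''
  for (( ll = 0; ll < length; ll++ )); do
      c=$((0x21 + RANDOM % 94))   # printable, non-space ASCII; $() would strip a trailing space
      string+=$(echo -e "\x$(printf %x "$c")")
  done
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc nvmf_create_subsystem -d "$string" nqn.2016-06.io.spdk:cnode5146 2>&1) || true
  [[ $out == *'Invalid MN'* ]]    # the assertion behind the invalid.sh@59 check above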
00:15:49.849 }' 00:15:49.849 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:49.849 { 00:15:49.849 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:49.849 "listen_address": { 00:15:49.849 "trtype": "tcp", 00:15:49.849 "traddr": "", 00:15:49.849 "trsvcid": "4421" 00:15:49.849 }, 00:15:49.849 "method": "nvmf_subsystem_remove_listener", 00:15:49.849 "req_id": 1 00:15:49.849 } 00:15:49.849 Got JSON-RPC error response 00:15:49.849 response: 00:15:49.849 { 00:15:49.849 "code": -32602, 00:15:49.849 "message": "Invalid parameters" 00:15:49.849 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:49.849 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6874 -i 0 00:15:49.849 [2024-11-26 07:25:17.856816] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6874: invalid cntlid range [0-65519] 00:15:49.849 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:49.849 { 00:15:49.849 "nqn": "nqn.2016-06.io.spdk:cnode6874", 00:15:49.849 "min_cntlid": 0, 00:15:49.849 "method": "nvmf_create_subsystem", 00:15:49.849 "req_id": 1 00:15:49.849 } 00:15:49.849 Got JSON-RPC error response 00:15:49.849 response: 00:15:49.849 { 00:15:49.849 "code": -32602, 00:15:49.849 "message": "Invalid cntlid range [0-65519]" 00:15:49.849 }' 00:15:49.849 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:49.849 { 00:15:49.849 "nqn": "nqn.2016-06.io.spdk:cnode6874", 00:15:49.849 "min_cntlid": 0, 00:15:49.849 "method": "nvmf_create_subsystem", 00:15:49.849 "req_id": 1 00:15:49.849 } 00:15:49.849 Got JSON-RPC error response 00:15:49.849 response: 00:15:49.849 { 00:15:49.849 "code": -32602, 00:15:49.849 "message": "Invalid cntlid range [0-65519]" 00:15:49.849 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:49.849 07:25:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10051 -i 65520 00:15:50.111 [2024-11-26 07:25:18.045374] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10051: invalid cntlid range [65520-65519] 00:15:50.111 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:50.111 { 00:15:50.111 "nqn": "nqn.2016-06.io.spdk:cnode10051", 00:15:50.112 "min_cntlid": 65520, 00:15:50.112 "method": "nvmf_create_subsystem", 00:15:50.112 "req_id": 1 00:15:50.112 } 00:15:50.112 Got JSON-RPC error response 00:15:50.112 response: 00:15:50.112 { 00:15:50.112 "code": -32602, 00:15:50.112 "message": "Invalid cntlid range [65520-65519]" 00:15:50.112 }' 00:15:50.112 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:50.112 { 00:15:50.112 "nqn": "nqn.2016-06.io.spdk:cnode10051", 00:15:50.112 "min_cntlid": 65520, 00:15:50.112 "method": "nvmf_create_subsystem", 00:15:50.112 "req_id": 1 00:15:50.112 } 00:15:50.112 Got JSON-RPC error response 00:15:50.112 response: 00:15:50.112 { 00:15:50.112 "code": -32602, 00:15:50.112 "message": "Invalid cntlid range [65520-65519]" 00:15:50.112 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.112 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # 
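[Editor's note] The listener-removal probe above runs with an empty traddr because the subsystem has no listeners yet (head -n 1 over an empty list left IP unset), and the guard only demands that the error not be the "Unable to stop listener." message. A minimal replay under those assumptions:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$($rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode \
      -t tcp -a '' -s 4421 2>&1) || true
  [[ $out != *'Unable to stop listener.'* ]]   # any other -32602 error passes the test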
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9746 -I 0 00:15:50.373 [2024-11-26 07:25:18.233993] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9746: invalid cntlid range [1-0] 00:15:50.373 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:50.373 { 00:15:50.373 "nqn": "nqn.2016-06.io.spdk:cnode9746", 00:15:50.373 "max_cntlid": 0, 00:15:50.373 "method": "nvmf_create_subsystem", 00:15:50.373 "req_id": 1 00:15:50.373 } 00:15:50.373 Got JSON-RPC error response 00:15:50.373 response: 00:15:50.373 { 00:15:50.373 "code": -32602, 00:15:50.373 "message": "Invalid cntlid range [1-0]" 00:15:50.373 }' 00:15:50.373 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:50.373 { 00:15:50.373 "nqn": "nqn.2016-06.io.spdk:cnode9746", 00:15:50.373 "max_cntlid": 0, 00:15:50.373 "method": "nvmf_create_subsystem", 00:15:50.373 "req_id": 1 00:15:50.373 } 00:15:50.373 Got JSON-RPC error response 00:15:50.373 response: 00:15:50.373 { 00:15:50.373 "code": -32602, 00:15:50.373 "message": "Invalid cntlid range [1-0]" 00:15:50.373 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.373 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24649 -I 65520 00:15:50.373 [2024-11-26 07:25:18.422610] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24649: invalid cntlid range [1-65520] 00:15:50.373 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:50.373 { 00:15:50.373 "nqn": "nqn.2016-06.io.spdk:cnode24649", 00:15:50.373 "max_cntlid": 65520, 00:15:50.373 "method": "nvmf_create_subsystem", 00:15:50.373 "req_id": 1 00:15:50.373 } 00:15:50.373 Got JSON-RPC error response 00:15:50.373 response: 00:15:50.373 { 00:15:50.373 "code": -32602, 00:15:50.373 "message": "Invalid cntlid range [1-65520]" 00:15:50.373 }' 00:15:50.373 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:50.373 { 00:15:50.373 "nqn": "nqn.2016-06.io.spdk:cnode24649", 00:15:50.373 "max_cntlid": 65520, 00:15:50.373 "method": "nvmf_create_subsystem", 00:15:50.373 "req_id": 1 00:15:50.373 } 00:15:50.373 Got JSON-RPC error response 00:15:50.373 response: 00:15:50.373 { 00:15:50.373 "code": -32602, 00:15:50.373 "message": "Invalid cntlid range [1-65520]" 00:15:50.373 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.373 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23314 -i 6 -I 5 00:15:50.633 [2024-11-26 07:25:18.603195] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23314: invalid cntlid range [6-5] 00:15:50.633 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:50.633 { 00:15:50.633 "nqn": "nqn.2016-06.io.spdk:cnode23314", 00:15:50.633 "min_cntlid": 6, 00:15:50.633 "max_cntlid": 5, 00:15:50.633 "method": "nvmf_create_subsystem", 00:15:50.633 "req_id": 1 00:15:50.633 } 00:15:50.633 Got JSON-RPC error response 00:15:50.633 response: 00:15:50.633 { 00:15:50.633 "code": -32602, 00:15:50.633 "message": "Invalid cntlid range [6-5]" 00:15:50.633 }' 00:15:50.634 07:25:18 
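[Editor's note] Five cntlid probes run back to back through here: min_cntlid 0 and 65520, max_cntlid 0 and 65520, and the inverted 6..5 range. Valid controller IDs span 1 through 65519, so every request must come back -32602 with "Invalid cntlid range". The same matrix, condensed (cnode9999 is a stand-in name; the trace uses a fresh cnode number per call, and $args is left unquoted on purpose so the flag pairs split into words):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
      out=$($rpc nvmf_create_subsystem $args nqn.2016-06.io.spdk:cnode9999 2>&1) || true
      [[ $out == *'Invalid cntlid range'* ]] || echo "unexpected: $out"
  done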
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:50.634 { 00:15:50.634 "nqn": "nqn.2016-06.io.spdk:cnode23314", 00:15:50.634 "min_cntlid": 6, 00:15:50.634 "max_cntlid": 5, 00:15:50.634 "method": "nvmf_create_subsystem", 00:15:50.634 "req_id": 1 00:15:50.634 } 00:15:50.634 Got JSON-RPC error response 00:15:50.634 response: 00:15:50.634 { 00:15:50.634 "code": -32602, 00:15:50.634 "message": "Invalid cntlid range [6-5]" 00:15:50.634 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.634 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:50.895 { 00:15:50.895 "name": "foobar", 00:15:50.895 "method": "nvmf_delete_target", 00:15:50.895 "req_id": 1 00:15:50.895 } 00:15:50.895 Got JSON-RPC error response 00:15:50.895 response: 00:15:50.895 { 00:15:50.895 "code": -32602, 00:15:50.895 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:50.895 }' 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:50.895 { 00:15:50.895 "name": "foobar", 00:15:50.895 "method": "nvmf_delete_target", 00:15:50.895 "req_id": 1 00:15:50.895 } 00:15:50.895 Got JSON-RPC error response 00:15:50.895 response: 00:15:50.895 { 00:15:50.895 "code": -32602, 00:15:50.895 "message": "The specified target doesn't exist, cannot delete it." 00:15:50.895 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:50.895 rmmod nvme_tcp 00:15:50.895 rmmod nvme_fabrics 00:15:50.895 rmmod nvme_keyring 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1382884 ']' 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1382884 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1382884 ']' 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1382884 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:15:50.895 07:25:18 
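[Editor's note] The last negative probe goes through the multitarget wrapper rather than rpc.py: deleting a target name that was never created yields a fixed error string, which the @88 glob match above checks for. Replayed directly:

  mrpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  out=$($mrpc nvmf_delete_target --name foobar 2>&1) || true
  [[ $out == *"The specified target doesn't exist, cannot delete it."* ]]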
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1382884 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1382884' 00:15:50.895 killing process with pid 1382884 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1382884 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1382884 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:15:50.895 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.156 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.156 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:51.156 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.156 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.156 07:25:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.145 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:53.145 00:15:53.145 real 0m14.158s 00:15:53.145 user 0m21.246s 00:15:53.145 sys 0m6.708s 00:15:53.145 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.145 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:53.145 ************************************ 00:15:53.145 END TEST nvmf_invalid 00:15:53.146 ************************************ 00:15:53.146 07:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:53.146 07:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.146 07:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.146 07:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.146 ************************************ 00:15:53.146 START TEST nvmf_connect_stress 00:15:53.146 ************************************ 00:15:53.146 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
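[Editor's note] That closes nvmf_invalid (14.158 s wall time): nvmftestfini unloads the kernel initiator modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines are modprobe -v narrating dependent removals), kills the target, strips the SPDK_NVMF-tagged iptables rules, and drops the namespace. Condensed, with this run's pid; the netns delete is an assumption about what _remove_spdk_ns does, since its body is traced into /dev/null above:

  nvmfpid=1382884
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # undo only the rules tagged at setup
  ip netns delete cvl_0_0_ns_spdk 2> /dev/null           # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1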
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:53.408 * Looking for test storage... 00:15:53.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:53.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.408 --rc genhtml_branch_coverage=1 00:15:53.408 --rc genhtml_function_coverage=1 00:15:53.408 --rc genhtml_legend=1 00:15:53.408 --rc geninfo_all_blocks=1 00:15:53.408 --rc geninfo_unexecuted_blocks=1 00:15:53.408 00:15:53.408 ' 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:53.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.408 --rc genhtml_branch_coverage=1 00:15:53.408 --rc genhtml_function_coverage=1 00:15:53.408 --rc genhtml_legend=1 00:15:53.408 --rc geninfo_all_blocks=1 00:15:53.408 --rc geninfo_unexecuted_blocks=1 00:15:53.408 00:15:53.408 ' 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:53.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.408 --rc genhtml_branch_coverage=1 00:15:53.408 --rc genhtml_function_coverage=1 00:15:53.408 --rc genhtml_legend=1 00:15:53.408 --rc geninfo_all_blocks=1 00:15:53.408 --rc geninfo_unexecuted_blocks=1 00:15:53.408 00:15:53.408 ' 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:53.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.408 --rc genhtml_branch_coverage=1 00:15:53.408 --rc genhtml_function_coverage=1 00:15:53.408 --rc genhtml_legend=1 00:15:53.408 --rc geninfo_all_blocks=1 00:15:53.408 --rc geninfo_unexecuted_blocks=1 00:15:53.408 00:15:53.408 ' 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
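[Editor's note] The scripts/common.sh trace above is a field-wise version comparison: lt 1.15 2 splits both version strings on '.', '-' and ':', then walks the fields until one differs, deciding whether the installed lcov predates 2.x and hence which coverage flags to export. A simplified stand-in with the same splitting rule (the real cmp_versions handles more operators than '<'):

  lt() {   # succeeds when version $1 sorts before version $2
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo 'old lcov: use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'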
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
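[Editor's note] Worth noting in the common.sh trace above: the initiator identity is minted fresh each run. nvme gen-hostnqn returns a uuid-style NQN, and the uuid suffix doubles as the host ID. A sketch of that derivation (the exact parameter expansion common.sh uses may differ):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # this run: nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to keep the uuid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")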
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.408 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:53.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:53.409 07:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:16:01.559 07:25:28 
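[Editor's note] One real defect shows through here: common.sh line 33 evaluates '[' '' -eq 1 ']' because some test flag was never exported, and test's -eq demands an integer, hence the "integer expression expected" complaint. It is benign in context (the test simply fails and the branch is skipped), but the usual guard is a default expansion. SOME_TEST_FLAG below is a placeholder, not the variable common.sh actually reads:

  # defaulting to 0 keeps '-eq' happy when the flag is unset
  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
      echo 'flag-gated branch taken'
  fi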
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:01.559 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:01.560 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:01.560 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:01.560 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:01.560 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
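[Editor's note] Device discovery above reduces to two steps: collect the e810 PCI functions by ID (0x8086:0x159b, the ice-driven ports at 0000:4b:00.0 and .1), then map each function to its netdev through sysfs. Condensed:

  pci_devs=(0000:4b:00.0 0000:4b:00.1)   # the two e810 ports found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      net_devs+=("${pci_net_devs[@]##*/}")   # basename each path -> cvl_0_0, cvl_0_1
  done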
-- # net_devs+=("${pci_net_devs[@]}") 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:01.560 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:01.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:16:01.560 00:16:01.560 --- 10.0.0.2 ping statistics --- 00:16:01.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.560 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:16:01.561 00:16:01.561 --- 10.0.0.1 ping statistics --- 00:16:01.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.561 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1388062 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1388062 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1388062 ']' 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
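[Editor's note] With the ports identified, nvmf_tcp_init wires a point-to-point test network: the target NIC moves into a fresh namespace, each side gets a 10.0.0.x/24 address, a tagged iptables rule opens TCP 4420, and one ping in each direction proves the path (0.610 ms and 0.273 ms above). The whole sequence, as traced:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator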
/var/tmp/spdk.sock...' 00:16:01.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.561 07:25:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.561 [2024-11-26 07:25:28.947404] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:16:01.561 [2024-11-26 07:25:28.947509] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.561 [2024-11-26 07:25:29.049864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:01.561 [2024-11-26 07:25:29.101449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.561 [2024-11-26 07:25:29.101500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.561 [2024-11-26 07:25:29.101509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.561 [2024-11-26 07:25:29.101516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.561 [2024-11-26 07:25:29.101523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.561 [2024-11-26 07:25:29.103303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.561 [2024-11-26 07:25:29.103568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.561 [2024-11-26 07:25:29.103570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.823 [2024-11-26 07:25:29.828581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
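[Editor's note] nvmfappstart launches the target inside the namespace and blocks until the RPC socket answers; with -m 0xE the app pins reactors to cores 1-3, which the three reactor notices above confirm. A sketch of the launch-and-wait, using rpc_get_methods as a readiness probe (the real waitforlisten is more elaborate than this polling loop):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
      sleep 0.5
  done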
00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.823 [2024-11-26 07:25:29.854175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:01.823 NULL1 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1388114 00:16:01.823 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.824 07:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:01.824 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:02.085 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:02.086 07:25:29 
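[Editor's note] Setup for the stress run, condensed below: a ten-controller subsystem backed by a null bdev, a listener on the namespaced address, and the connect_stress client hammering it for 10 s from core 0. The twenty bare `cat` lines above are the xtrace view of heredocs being appended to rpc.txt; set -x never echoes heredoc bodies, so the per-iteration RPC payload is invisible in this log. Sketch, with rpc_cmd standing in for the autotest wrapper around rpc.py:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512        # 1000 MiB backing device, 512 B blocks
  "$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!
  rm -f "$spdk/test/nvmf/target/rpc.txt"         # the seq 1 20 loop above then refills it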
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1388114 00:16:02.086 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.086 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.086 07:25:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.348 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.348 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1388114 00:16:02.348 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.348 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.348 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.611 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.611 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1388114 00:16:02.611 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.611 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.611 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:02.872 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.872 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1388114 00:16:02.872 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:02.872 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.872 07:25:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.445 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.445 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1388114 00:16:03.445 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:03.445 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.445 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.706 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.706 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1388114 00:16:03.706 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:03.706 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.706 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:03.966 07:25:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.966 07:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1388114
[the same pair of records -- connect_stress.sh@34 kill -0 1388114 followed by connect_stress.sh@35 rpc_cmd -- repeats unchanged roughly every 250 ms from 00:16:03 through 00:16:11 while the stress run is active; the intervening repetitions are elided here]
00:16:11.764 07:25:39
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1388114 00:16:11.764 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.764 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.764 07:25:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.025 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1388114 00:16:12.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1388114) - No such process 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1388114 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:12.025 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:12.025 rmmod nvme_tcp 00:16:12.025 rmmod nvme_fabrics 00:16:12.025 rmmod nvme_keyring 00:16:12.285 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:12.285 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:16:12.285 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:16:12.285 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1388062 ']' 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1388062 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1388062 ']' 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1388062 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1388062 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1388062' 00:16:12.286 killing process with pid 1388062 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1388062 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1388062 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.286 07:25:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:14.835 00:16:14.835 real 0m21.243s 00:16:14.835 user 0m42.121s 00:16:14.835 sys 0m9.359s 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.835 ************************************ 00:16:14.835 END TEST nvmf_connect_stress 00:16:14.835 ************************************ 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:14.835 ************************************ 00:16:14.835 START TEST nvmf_fused_ordering 00:16:14.835 ************************************ 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:14.835 * Looking for test storage... 
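The long run of repeated @34/@35 records in the connect_stress output above is the script's liveness loop: as long as the stressor answers kill -0, the queued RPCs in rpc.txt are replayed against the target; once kill -0 reports "(1388114) - No such process", the script reaps the PID, deletes rpc.txt, clears its exit trap, and runs nvmftestfini, which unloads nvme-tcp/nvme-fabrics/nvme-keyring and has killprocess stop the nvmf_tgt reactor (pid 1388062). A plausible shape for that loop, inferred from the repeating trace; the redirect into rpc_cmd is an assumption, since xtrace does not print redirections:

    # Inferred from the repeating @34/@35 records -- a sketch, not the verbatim script.
    while kill -0 "$PERF_PID" 2>/dev/null; do    # line 34: is the stressor still running?
        rpc_cmd <"$rpcs"                         # line 35: replay the queued RPC payloads
    done
    wait "$PERF_PID"                             # reap the stressor once it has exited
    rm -f "$rpcs"
    trap - SIGINT SIGTERM EXIT
    nvmftestfini                                 # sync, rmmod nvme-tcp/nvme-fabrics/nvme-keyring,
                                                 # then killprocess for the nvmf_tgt reactor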
00:16:14.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.835 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:14.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.836 --rc genhtml_branch_coverage=1 00:16:14.836 --rc genhtml_function_coverage=1 00:16:14.836 --rc genhtml_legend=1 00:16:14.836 --rc geninfo_all_blocks=1 00:16:14.836 --rc geninfo_unexecuted_blocks=1 00:16:14.836 00:16:14.836 ' 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:14.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.836 --rc genhtml_branch_coverage=1 00:16:14.836 --rc genhtml_function_coverage=1 00:16:14.836 --rc genhtml_legend=1 00:16:14.836 --rc geninfo_all_blocks=1 00:16:14.836 --rc geninfo_unexecuted_blocks=1 00:16:14.836 00:16:14.836 ' 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:14.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.836 --rc genhtml_branch_coverage=1 00:16:14.836 --rc genhtml_function_coverage=1 00:16:14.836 --rc genhtml_legend=1 00:16:14.836 --rc geninfo_all_blocks=1 00:16:14.836 --rc geninfo_unexecuted_blocks=1 00:16:14.836 00:16:14.836 ' 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:14.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.836 --rc genhtml_branch_coverage=1 00:16:14.836 --rc genhtml_function_coverage=1 00:16:14.836 --rc genhtml_legend=1 00:16:14.836 --rc geninfo_all_blocks=1 00:16:14.836 --rc geninfo_unexecuted_blocks=1 00:16:14.836 00:16:14.836 ' 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:14.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:14.836 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:16:14.837 07:25:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:16:22.978 07:25:49 
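From here nvmftestinit probes the machine for usable NICs: gather_supported_nvmf_pci_devs buckets PCI functions by vendor/device ID into e810, x722, and mlx arrays, and because this rig sets SPDK_TEST_NVMF_NICS=e810 the e810 bucket becomes the device list; the records that follow show 0000:4b:00.0 and 0000:4b:00.1 matching 0x8086:0x159b and contributing the net devices cvl_0_0 and cvl_0_1. A condensed sketch of that classification, assuming pci_bus_cache maps "vendor:device" keys to bus addresses as the expansions in the trace suggest:

    # Condensed from the nvmf/common.sh xtrace; the pci_bus_cache layout is assumed.
    intel=0x8086 mellanox=0x15b3
    declare -A pci_bus_cache                     # "vendor:device" -> space-separated PCI addresses
    e810=() x722=() mlx=() pci_devs=() net_devs=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # one E810 device ID
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # the E810 ID that 0000:4b:00.0/.1 match here
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs in the real list
    pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810 keeps only E810 ports
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # kernel netdev entries for this function
        net_devs+=("${pci_net_devs[@]##*/}")              # cvl_0_0 and cvl_0_1 in this run
    done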
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:22.978 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:22.978 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:22.979 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:22.979 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:22.979 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.979 07:25:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:22.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:16:22.979 00:16:22.979 --- 10.0.0.2 ping statistics --- 00:16:22.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.979 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:16:22.979 00:16:22.979 --- 10.0.0.1 ping statistics --- 00:16:22.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.979 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1394455 00:16:22.979 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1394455 00:16:22.980 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:22.980 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1394455 ']' 00:16:22.980 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.980 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.980 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:22.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.980 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.980 07:25:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:22.980 [2024-11-26 07:25:50.301477] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:16:22.980 [2024-11-26 07:25:50.301545] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.980 [2024-11-26 07:25:50.403104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.980 [2024-11-26 07:25:50.453683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.980 [2024-11-26 07:25:50.453739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.980 [2024-11-26 07:25:50.453747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.980 [2024-11-26 07:25:50.453754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.980 [2024-11-26 07:25:50.453761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.980 [2024-11-26 07:25:50.454538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:23.241 [2024-11-26 07:25:51.184902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:23.241 [2024-11-26 07:25:51.209142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:23.241 NULL1 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.241 07:25:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:23.241 [2024-11-26 07:25:51.278751] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
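
At this point the target side is fully provisioned: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a 1 GB null bdev attached as namespace 1, all driven through rpc_cmd against the nvmf_tgt started earlier inside the cvl_0_0_ns_spdk namespace. As a minimal sketch, the same sequence could be replayed by hand with scripts/rpc.py (rpc_cmd in the trace is the test framework's wrapper around it; the direct invocation below is an illustration, not a copy of fused_ordering.sh):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # transport and subsystem, flags exactly as in the trace above
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 1000 MB null bdev with 512-byte blocks -> reported as "size: 1GB" below
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering app launched above then connects to that listener and exercises NVMe fused command pairs (two commands the controller must execute back to back, in order); each fused_ordering(N) line that follows appears to be a per-request progress counter from the app, counting up to 1024 submissions.
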
00:16:23.241 [2024-11-26 07:25:51.278799] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394596 ]
00:16:23.814 Attached to nqn.2016-06.io.spdk:cnode1
00:16:23.814 Namespace ID: 1 size: 1GB
00:16:23.814 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) elided: 1022 identical single-counter lines, console timestamps advancing from 00:16:23.814 to 00:16:25.796]
00:16:25.796 fused_ordering(1023)
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:25.796 rmmod nvme_tcp
00:16:25.796 rmmod nvme_fabrics
00:16:25.796 rmmod nvme_keyring
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:16:25.796 07:25:53
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1394455 ']' 00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1394455 00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1394455 ']' 00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1394455 00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.796 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1394455 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1394455' 00:16:26.057 killing process with pid 1394455 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1394455 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1394455 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.057 07:25:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:28.603 00:16:28.603 real 0m13.596s 00:16:28.603 user 0m7.178s 00:16:28.603 sys 0m7.365s 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:28.603 ************************************ 00:16:28.603 END TEST nvmf_fused_ordering 00:16:28.603 
************************************ 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.603 ************************************ 00:16:28.603 START TEST nvmf_ns_masking 00:16:28.603 ************************************ 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:28.603 * Looking for test storage... 00:16:28.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.603 --rc genhtml_branch_coverage=1 00:16:28.603 --rc genhtml_function_coverage=1 00:16:28.603 --rc genhtml_legend=1 00:16:28.603 --rc geninfo_all_blocks=1 00:16:28.603 --rc geninfo_unexecuted_blocks=1 00:16:28.603 00:16:28.603 ' 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.603 --rc genhtml_branch_coverage=1 00:16:28.603 --rc genhtml_function_coverage=1 00:16:28.603 --rc genhtml_legend=1 00:16:28.603 --rc geninfo_all_blocks=1 00:16:28.603 --rc geninfo_unexecuted_blocks=1 00:16:28.603 00:16:28.603 ' 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.603 --rc genhtml_branch_coverage=1 00:16:28.603 --rc genhtml_function_coverage=1 00:16:28.603 --rc genhtml_legend=1 00:16:28.603 --rc geninfo_all_blocks=1 00:16:28.603 --rc geninfo_unexecuted_blocks=1 00:16:28.603 00:16:28.603 ' 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:28.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.603 --rc genhtml_branch_coverage=1 00:16:28.603 --rc genhtml_function_coverage=1 00:16:28.603 --rc genhtml_legend=1 00:16:28.603 --rc geninfo_all_blocks=1 00:16:28.603 --rc geninfo_unexecuted_blocks=1 00:16:28.603 00:16:28.603 ' 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.603 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=[~1200-character value elided: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeated several times, followed by the standard system paths through /var/lib/snapd/snap/bin]
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[same value, re-prefixed with /opt/go/1.21.1/bin]
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[same value, re-prefixed with /opt/protoc/21.7/bin]
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [the exported PATH value, as above]
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:28.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=34d8cad0-2b2e-4a8c-8e0b-08b679104ad2 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b073ea49-5d09-44cf-97cf-0ab4e3c36baa 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=eeaad576-7b7c-4bb0-88b2-9bbb862b445d 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:16:28.604 07:25:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.749 07:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:36.749 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:36.749 07:26:03 
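
The array plumbing above classifies NICs purely by PCI vendor:device pairs: 0x8086:0x1592 and 0x8086:0x159b are Intel E810 variants, 0x8086:0x37d2 is X722, and the 0x15b3 entries are Mellanox ConnectX parts; with SPDK_TEST_NVMF_NICS=e810 only the e810 list feeds pci_devs. A rough standalone equivalent of that match using lspci instead of the harness's pci_bus_cache map:

    # Print PCI functions that would land in the e810 array (requires pciutils).
    lspci -Dnn | awk '/\[8086:(1592|159b)\]/ { print $1 }'
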
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.749 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:36.750 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:36.750 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
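
Each matching function is then resolved to its kernel net device through sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 become cvl_0_0 and cvl_0_1 above. The same lookup by hand:

    pci=0000:4b:00.0
    for d in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$d" ] && echo "Found net device under $pci: ${d##*/}"
    done
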
00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:36.750 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.750 07:26:03 
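
nvmf_tcp_init then builds a two-endpoint topology on a single box: the first port moves into a fresh network namespace and becomes the target at 10.0.0.2, while the second port stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace (root required):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
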
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:16:36.750 00:16:36.750 --- 10.0.0.2 ping statistics --- 00:16:36.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.750 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:36.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:16:36.750 00:16:36.750 --- 10.0.0.1 ping statistics --- 00:16:36.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.750 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1399327 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1399327 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1399327 ']' 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.750 07:26:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:36.750 [2024-11-26 07:26:04.013871] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:16:36.750 [2024-11-26 07:26:04.013942] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.750 [2024-11-26 07:26:04.115408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.750 [2024-11-26 07:26:04.167041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.750 [2024-11-26 07:26:04.167091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.750 [2024-11-26 07:26:04.167100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.750 [2024-11-26 07:26:04.167108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.750 [2024-11-26 07:26:04.167114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
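
With the cross-namespace pings green, the target application comes up inside the namespace and waitforlisten polls until /var/tmp/spdk.sock accepts RPCs. The launch, condensed (the iptables rule mirrors the ipts helper above, minus its bookkeeping comment):

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
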
00:16:36.750 [2024-11-26 07:26:04.167874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.750 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.750 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:36.750 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.750 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.751 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:37.011 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.011 07:26:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:37.011 [2024-11-26 07:26:05.030490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.011 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:37.011 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:37.011 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:37.272 Malloc1 00:16:37.272 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:37.552 Malloc2 00:16:37.552 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:37.812 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:37.812 07:26:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.072 [2024-11-26 07:26:06.068417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.072 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:38.073 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eeaad576-7b7c-4bb0-88b2-9bbb862b445d -a 10.0.0.2 -s 4420 -i 4 00:16:38.333 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.333 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:38.333 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.333 07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:38.333 
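
Once the reactor is running, provisioning is a short RPC sequence followed by a kernel-initiator connect; the unix RPC socket is visible from both namespaces, so rpc.py needs no netns wrapper. Condensed from the trace, with $HOSTID being the uuidgen value from the fixtures:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4
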
07:26:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:40.247 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:40.247 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:40.247 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:40.508 [ 0]:0x1 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9dba4376720647598d4467d86899851a 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9dba4376720647598d4467d86899851a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:40.508 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:40.768 [ 0]:0x1 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9dba4376720647598d4467d86899851a 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9dba4376720647598d4467d86899851a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:40.768 07:26:08 
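
The "[ 0]:0x1" lines are raw nvme list-ns output; ns_is_visible pairs that listing with an NGUID probe so a masked namespace, whose NGUID reads back as all zeroes, counts as hidden even if the NSID string appears. A standalone rendering of the helper (nvme-cli and jq assumed, controller as enumerated above):

    ns_is_visible() {    # $1 = NSID in hex, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1 && echo "nsid 1 visible to this host"
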
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:40.768 [ 1]:0x2 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ab951dc12f4c4946826a87dbd77eb0aa 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ab951dc12f4c4946826a87dbd77eb0aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:40.768 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.031 07:26:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.031 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:41.291 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:41.291 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eeaad576-7b7c-4bb0-88b2-9bbb862b445d -a 10.0.0.2 -s 4420 -i 4 00:16:41.551 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:41.551 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:41.551 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.551 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:16:41.551 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:16:41.551 07:26:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.466 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:43.728 [ 0]:0x2 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=ab951dc12f4c4946826a87dbd77eb0aa 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ab951dc12f4c4946826a87dbd77eb0aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:43.728 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:43.988 [ 0]:0x1 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9dba4376720647598d4467d86899851a 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9dba4376720647598d4467d86899851a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:43.988 [ 1]:0x2 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ab951dc12f4c4946826a87dbd77eb0aa 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ab951dc12f4c4946826a87dbd77eb0aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:43.988 07:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.251 07:26:12 
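
This is the heart of the test: namespace 1 was re-created with --no-auto-visible, so it surfaces only for hosts that have been granted access, and revoking the grant hides it again with no disconnect required. The grant/revoke pair, condensed:

    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1    # nsid 1 appears
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1    # nsid 1 hidden again
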
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.251 [ 0]:0x2 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ab951dc12f4c4946826a87dbd77eb0aa 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ab951dc12f4c4946826a87dbd77eb0aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:44.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.251 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:44.512 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:44.512 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eeaad576-7b7c-4bb0-88b2-9bbb862b445d -a 10.0.0.2 -s 4420 -i 4 00:16:44.772 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:44.772 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:44.772 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
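
The valid_exec_arg/es plumbing running through these checks is the harness's NOT helper: it executes a command that is expected to fail and turns a nonzero exit into success, with the (( es > 128 )) branch distinguishing crashes from ordinary failures. A minimal equivalent of the happy path:

    NOT() { ! "$@"; }    # sketch only; the real helper also vets the command and screens signal exits
    NOT ns_is_visible 0x1 && echo "nsid 1 is masked, as expected"
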
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.772 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:44.772 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:44.772 07:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:46.684 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:46.684 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:46.684 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.684 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:46.684 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.684 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:46.684 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:46.684 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:46.945 [ 0]:0x1 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9dba4376720647598d4467d86899851a 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9dba4376720647598d4467d86899851a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:46.945 [ 1]:0x2 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:46.945 07:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:46.945 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ab951dc12f4c4946826a87dbd77eb0aa 00:16:46.945 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ab951dc12f4c4946826a87dbd77eb0aa != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:46.945 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:47.205 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:47.206 [ 0]:0x2 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:47.206 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ab951dc12f4c4946826a87dbd77eb0aa 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ab951dc12f4c4946826a87dbd77eb0aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:47.502 07:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:47.502 [2024-11-26 07:26:15.454354] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:47.502 request: 00:16:47.502 { 00:16:47.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.502 "nsid": 2, 00:16:47.502 "host": "nqn.2016-06.io.spdk:host1", 00:16:47.502 "method": "nvmf_ns_remove_host", 00:16:47.502 "req_id": 1 00:16:47.502 } 00:16:47.502 Got JSON-RPC error response 00:16:47.502 response: 00:16:47.502 { 00:16:47.502 "code": -32602, 00:16:47.502 "message": "Invalid parameters" 00:16:47.502 } 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:47.502 07:26:15 
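
The -32602 response above is a deliberate negative check: namespace 2 was added without --no-auto-visible, and the per-host visibility RPCs apply only to masked namespaces, so the target refuses the call and nvmf_rpc_ns_visible_paused logs the rejection. Probing the same refusal by hand:

    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
        || echo "rejected as expected: Invalid parameters (-32602)"
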
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.502 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.503 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.503 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:47.503 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:47.503 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:47.503 [ 0]:0x2 00:16:47.503 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:47.503 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ab951dc12f4c4946826a87dbd77eb0aa 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ab951dc12f4c4946826a87dbd77eb0aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:47.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1401656 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1401656 /var/tmp/host.sock 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1401656 ']' 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:47.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.792 07:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:47.792 [2024-11-26 07:26:15.706191] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:16:47.792 [2024-11-26 07:26:15.706243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1401656 ] 00:16:47.792 [2024-11-26 07:26:15.794049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.792 [2024-11-26 07:26:15.830090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.766 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.766 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:48.766 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.766 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:48.766 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 34d8cad0-2b2e-4a8c-8e0b-08b679104ad2 00:16:48.766 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:49.026 07:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 34D8CAD02B2E4A8C8E0B08B679104AD2 -i 00:16:49.026 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b073ea49-5d09-44cf-97cf-0ab4e3c36baa 00:16:49.026 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:49.026 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B073EA495D0944CF97CF0AB4E3C36BAA -i 00:16:49.288 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:49.549 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:49.549 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:49.549 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:50.121 nvme0n1 00:16:50.121 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:50.121 07:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:50.121 nvme1n2 00:16:50.382 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:50.382 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:50.382 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:50.382 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:50.382 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:50.382 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:50.382 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:50.382 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:50.382 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:50.643 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 34d8cad0-2b2e-4a8c-8e0b-08b679104ad2 == \3\4\d\8\c\a\d\0\-\2\b\2\e\-\4\a\8\c\-\8\e\0\b\-\0\8\b\6\7\9\1\0\4\a\d\2 ]] 00:16:50.643 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:50.643 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:50.643 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:50.903 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
b073ea49-5d09-44cf-97cf-0ab4e3c36baa == \b\0\7\3\e\a\4\9\-\5\d\0\9\-\4\4\c\f\-\9\7\c\f\-\0\a\b\4\e\3\c\3\6\b\a\a ]] 00:16:50.903 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:50.903 07:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 34d8cad0-2b2e-4a8c-8e0b-08b679104ad2 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 34D8CAD02B2E4A8C8E0B08B679104AD2 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 34D8CAD02B2E4A8C8E0B08B679104AD2 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:51.163 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 34D8CAD02B2E4A8C8E0B08B679104AD2 00:16:51.163 [2024-11-26 07:26:19.252290] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:51.163 [2024-11-26 07:26:19.252316] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:51.163 [2024-11-26 07:26:19.252324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:51.424 request: 00:16:51.424 { 00:16:51.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.424 "namespace": { 00:16:51.424 "bdev_name": 
"invalid", 00:16:51.424 "nsid": 1, 00:16:51.424 "nguid": "34D8CAD02B2E4A8C8E0B08B679104AD2", 00:16:51.424 "no_auto_visible": false 00:16:51.424 }, 00:16:51.424 "method": "nvmf_subsystem_add_ns", 00:16:51.424 "req_id": 1 00:16:51.424 } 00:16:51.424 Got JSON-RPC error response 00:16:51.424 response: 00:16:51.424 { 00:16:51.424 "code": -32602, 00:16:51.424 "message": "Invalid parameters" 00:16:51.424 } 00:16:51.424 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:51.424 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:51.424 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:51.424 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:51.424 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 34d8cad0-2b2e-4a8c-8e0b-08b679104ad2 00:16:51.424 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:51.424 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 34D8CAD02B2E4A8C8E0B08B679104AD2 -i 00:16:51.424 07:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1401656 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1401656 ']' 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1401656 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1401656 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1401656' 00:16:53.967 killing process with pid 1401656 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1401656 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1401656 00:16:53.967 07:26:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.227 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:54.227 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:54.227 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:54.227 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:54.227 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:54.227 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:54.227 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:54.227 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:54.227 rmmod nvme_tcp 00:16:54.227 rmmod nvme_fabrics 00:16:54.227 rmmod nvme_keyring 00:16:54.227 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:54.227 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1399327 ']' 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1399327 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1399327 ']' 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1399327 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1399327 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1399327' 00:16:54.228 killing process with pid 1399327 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1399327 00:16:54.228 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1399327 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.488 07:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.401 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:56.402 00:16:56.402 real 0m28.286s 00:16:56.402 user 0m32.126s 00:16:56.402 sys 0m8.303s 00:16:56.402 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.402 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:56.402 ************************************ 00:16:56.402 END TEST nvmf_ns_masking 00:16:56.402 ************************************ 00:16:56.402 07:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:56.402 07:26:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:56.402 07:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:56.402 07:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.402 07:26:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.664 ************************************ 00:16:56.664 START TEST nvmf_nvme_cli 00:16:56.664 ************************************ 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:56.664 * Looking for test storage... 
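[editor's note] The ns_masking exercise that just ended boils down to a short RPC sequence worth seeing in one place. A condensed sketch using the subsystem NQN, host NQN, and UUID from the run above — rpc.py stands in for the full scripts/rpc.py path in the trace, and the -i flag, as this test appears to use it, registers the namespace without making it auto-visible:

    uuid=34d8cad0-2b2e-4a8c-8e0b-08b679104ad2
    nguid=$(tr -d - <<< "$uuid" | tr a-f A-F)   # uuid2nguid: -> 34D8CAD02B2E4A8C8E0B08B679104AD2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i  # hidden until granted
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1       # unmask nsid 1 for host1
    # host1 can now attach and sees the namespace as nvme0n1; re-adding the same
    # NGUID on a nonexistent bdev fails with -32602, as the NOT wrapper verified above
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1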
00:16:56.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:56.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.664 --rc genhtml_branch_coverage=1 00:16:56.664 --rc genhtml_function_coverage=1 00:16:56.664 --rc genhtml_legend=1 00:16:56.664 --rc geninfo_all_blocks=1 00:16:56.664 --rc geninfo_unexecuted_blocks=1 00:16:56.664 00:16:56.664 ' 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:56.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.664 --rc genhtml_branch_coverage=1 00:16:56.664 --rc genhtml_function_coverage=1 00:16:56.664 --rc genhtml_legend=1 00:16:56.664 --rc geninfo_all_blocks=1 00:16:56.664 --rc geninfo_unexecuted_blocks=1 00:16:56.664 00:16:56.664 ' 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:56.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.664 --rc genhtml_branch_coverage=1 00:16:56.664 --rc genhtml_function_coverage=1 00:16:56.664 --rc genhtml_legend=1 00:16:56.664 --rc geninfo_all_blocks=1 00:16:56.664 --rc geninfo_unexecuted_blocks=1 00:16:56.664 00:16:56.664 ' 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:56.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.664 --rc genhtml_branch_coverage=1 00:16:56.664 --rc genhtml_function_coverage=1 00:16:56.664 --rc genhtml_legend=1 00:16:56.664 --rc geninfo_all_blocks=1 00:16:56.664 --rc geninfo_unexecuted_blocks=1 00:16:56.664 00:16:56.664 ' 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
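[editor's note] The lcov probe traced above leans on the pure-bash version comparator in scripts/common.sh (lt delegating to cmp_versions, splitting on '.', '-' and ':'). A minimal re-sketch of that idiom — not the exact upstream code, which also validates each component through decimal():

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:                    # split version strings on '.', '-' or ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == *'<'* ]]; return; }
        done
        [[ $2 == *'='* ]]                # all components equal
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # the check traced above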
00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:56.664 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.665 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.665 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.665 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.665 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.665 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.665 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.665 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.665 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.927 07:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.927 07:26:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:05.074 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:05.075 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:05.075 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:05.075 
07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:05.075 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:05.075 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:05.075 07:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:05.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:17:05.075 00:17:05.075 --- 10.0.0.2 ping statistics --- 00:17:05.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.075 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:05.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:05.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:17:05.075 00:17:05.075 --- 10.0.0.1 ping statistics --- 00:17:05.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.075 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:05.075 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1407306 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1407306 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1407306 ']' 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.076 07:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.076 [2024-11-26 07:26:32.378594] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
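[editor's note] Before nvmf_tgt was launched, nvmftestinit stitched the two e810 ports into a point-to-point test network: the target port moves into a network namespace, the initiator port stays in the root namespace, and a firewall exception opens port 4420. Condensed from the trace above (interface names cvl_0_0/cvl_0_1 are this rig's):

    ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back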
00:17:05.076 [2024-11-26 07:26:32.378668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.076 [2024-11-26 07:26:32.478931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:05.076 [2024-11-26 07:26:32.534275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.076 [2024-11-26 07:26:32.534332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.076 [2024-11-26 07:26:32.534341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.076 [2024-11-26 07:26:32.534349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.076 [2024-11-26 07:26:32.534355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.076 [2024-11-26 07:26:32.536784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.076 [2024-11-26 07:26:32.536947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.076 [2024-11-26 07:26:32.537109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.076 [2024-11-26 07:26:32.537109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.338 [2024-11-26 07:26:33.257937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.338 Malloc0 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
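[editor's note] With the target now up on four reactors, the test provisions it over /var/tmp/spdk.sock. Collected in one place, the bring-up plus the host-side attach amounts to the calls below; the transport and bdev calls are traced above, the subsystem, namespace, listener, and nvme-cli calls follow just below in the trace (rpc.py again abbreviates the full scripts/rpc.py path):

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # '-t tcp -o' from NVMF_TRANSPORT_OPTS
    rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdevs, 512 B blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # host side: enumerate, attach (two namespaces appear as /dev/nvme0n1, /dev/nvme0n2), detach
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1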
00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.338 Malloc1 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.338 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.339 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.339 [2024-11-26 07:26:33.370871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.339 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.339 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:05.339 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.339 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:05.339 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.339 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:17:05.600 00:17:05.600 Discovery Log Number of Records 2, Generation counter 2 00:17:05.600 =====Discovery Log Entry 0====== 00:17:05.600 trtype: tcp 00:17:05.600 adrfam: ipv4 00:17:05.600 subtype: current discovery subsystem 00:17:05.600 treq: not required 00:17:05.600 portid: 0 00:17:05.600 trsvcid: 4420 00:17:05.600 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:05.600 traddr: 10.0.0.2 00:17:05.600 eflags: explicit discovery connections, duplicate discovery information 00:17:05.600 sectype: none 00:17:05.600 =====Discovery Log Entry 1====== 00:17:05.600 trtype: tcp 00:17:05.600 adrfam: ipv4 00:17:05.600 subtype: nvme subsystem 00:17:05.600 treq: not required 00:17:05.600 portid: 0 00:17:05.600 trsvcid: 4420 00:17:05.600 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:05.600 traddr: 10.0.0.2 00:17:05.600 eflags: none 00:17:05.600 sectype: none 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:05.600 07:26:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.987 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:06.987 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:06.987 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.987 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:06.987 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:06.987 07:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:09.529 07:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:09.529 /dev/nvme0n2 ]] 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.529 07:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:09.529 rmmod nvme_tcp 00:17:09.529 rmmod nvme_fabrics 00:17:09.529 rmmod nvme_keyring 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1407306 ']' 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1407306 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1407306 ']' 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1407306 00:17:09.529 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1407306 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1407306' 00:17:09.530 killing process with pid 1407306 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1407306 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1407306 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.530 07:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:12.070 00:17:12.070 real 0m15.118s 00:17:12.070 user 0m22.292s 00:17:12.070 sys 0m6.422s 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:12.070 ************************************ 00:17:12.070 END TEST nvmf_nvme_cli 00:17:12.070 ************************************ 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.070 ************************************ 00:17:12.070 START TEST nvmf_vfio_user 00:17:12.070 ************************************ 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
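
The killprocess calls traced above (@954-@978) show the shape of the teardown helper: verify a pid was passed, confirm the process is alive, refuse to signal a bare sudo wrapper, then kill and reap. A condensed sketch under those assumptions; the sudo re-exec branch the real helper carries is elided:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1            # @954: no pid, nothing to do
        kill -0 "$pid" 2> /dev/null || return 1   # @958: already gone?
        local name
        name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0 in this run
        [[ $name == sudo ]] && return 1      # @964: never kill the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"           # @973/@978: signal, then reap;
                                             # wait works because nvmf_tgt was
                                             # launched by this same shell
    }
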
--transport=tcp 00:17:12.070 * Looking for test storage... 00:17:12.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.070 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:12.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.071 --rc genhtml_branch_coverage=1 00:17:12.071 --rc genhtml_function_coverage=1 00:17:12.071 --rc genhtml_legend=1 00:17:12.071 --rc geninfo_all_blocks=1 00:17:12.071 --rc geninfo_unexecuted_blocks=1 00:17:12.071 00:17:12.071 ' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:12.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.071 --rc genhtml_branch_coverage=1 00:17:12.071 --rc genhtml_function_coverage=1 00:17:12.071 --rc genhtml_legend=1 00:17:12.071 --rc geninfo_all_blocks=1 00:17:12.071 --rc geninfo_unexecuted_blocks=1 00:17:12.071 00:17:12.071 ' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:12.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.071 --rc genhtml_branch_coverage=1 00:17:12.071 --rc genhtml_function_coverage=1 00:17:12.071 --rc genhtml_legend=1 00:17:12.071 --rc geninfo_all_blocks=1 00:17:12.071 --rc geninfo_unexecuted_blocks=1 00:17:12.071 00:17:12.071 ' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:12.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.071 --rc genhtml_branch_coverage=1 00:17:12.071 --rc genhtml_function_coverage=1 00:17:12.071 --rc genhtml_legend=1 00:17:12.071 --rc geninfo_all_blocks=1 00:17:12.071 --rc geninfo_unexecuted_blocks=1 00:17:12.071 00:17:12.071 ' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
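
The scripts/common.sh trace above is a component-wise version comparison: split both versions on '.', '-' and ':' into arrays, then walk the components numerically (here `lt 1.15 2`, so lcov 1.15 compares as older than 2 and the LCOV_OPTS branch is taken). A reduced sketch of the same logic, keeping only the '<' case; the `:-0` padding for short versions is my assumption for the missing-component default:

    # Returns 0 if $1 < $2, comparing numeric components left to right.
    version_lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
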
nvmf/common.sh@7 -- # uname -s 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
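
Note the "[: : integer expression expected" error above: common.sh line 33 evaluates '[' '' -eq 1 ']' because the tested variable is empty, and test(1) cannot compare an empty string numerically. The run tolerates it (the branch is simply not taken), but the usual defensive pattern is to default the expansion before the numeric test. VAR below is a placeholder, not the script's actual variable name:

    # Hypothetical guard: default an unset/empty flag to 0 before comparing.
    if [[ ${VAR:-0} -eq 1 ]]; then
        echo "flag enabled"
    fi
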
00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1408864 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1408864' 00:17:12.071 Process pid: 1408864 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1408864 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1408864 ']' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:12.071 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.072 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.072 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.072 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.072 07:26:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:12.072 [2024-11-26 07:26:40.022701] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:17:12.072 [2024-11-26 07:26:40.022783] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.072 [2024-11-26 07:26:40.113704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.072 [2024-11-26 07:26:40.148549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.072 [2024-11-26 07:26:40.148580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:12.072 [2024-11-26 07:26:40.148586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.072 [2024-11-26 07:26:40.148591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.072 [2024-11-26 07:26:40.148595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.072 [2024-11-26 07:26:40.150112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.072 [2024-11-26 07:26:40.150267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.072 [2024-11-26 07:26:40.150517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.072 [2024-11-26 07:26:40.150517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.013 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.013 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:13.013 07:26:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:13.953 07:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:13.953 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:13.953 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:13.953 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:13.953 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:13.953 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:14.213 Malloc1 00:17:14.213 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:14.474 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:14.735 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:14.735 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:14.735 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:14.735 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:14.995 Malloc2 00:17:14.995 07:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
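
At this point the trace has completed the vfio-user bring-up for device 1 and is repeating it for device 2 (steps @64-@74): create the VFIOUSER transport, then per device make a socket directory, back it with a malloc bdev, and wire subsystem, namespace, and listener together. Collapsed into one runnable sequence, these are exactly the RPCs traced above, with only the rpc.py path shortened:

    rpc=scripts/rpc.py   # the log uses the absolute spdk tree path
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i          # 64 MiB bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

The listener address is the per-device socket directory rather than an IP, which is the defining difference between the VFIOUSER transport and the TCP transport used in the nvme_cli test above.
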
00:17:15.256 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:15.256 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:15.516 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:15.516 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:15.516 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:15.516 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:15.516 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:15.516 07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:15.516 [2024-11-26 07:26:43.535340] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:17:15.516 [2024-11-26 07:26:43.535381] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409561 ] 00:17:15.516 [2024-11-26 07:26:43.574446] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:15.516 [2024-11-26 07:26:43.579752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:15.516 [2024-11-26 07:26:43.579770] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f28ff7ec000 00:17:15.516 [2024-11-26 07:26:43.580756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:15.516 [2024-11-26 07:26:43.581755] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:15.516 [2024-11-26 07:26:43.582769] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:15.516 [2024-11-26 07:26:43.583774] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:15.516 [2024-11-26 07:26:43.584777] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:15.516 [2024-11-26 07:26:43.585779] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:15.516 [2024-11-26 07:26:43.586791] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:17:15.516 [2024-11-26 07:26:43.587796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:15.516 [2024-11-26 07:26:43.588799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:15.516 [2024-11-26 07:26:43.588806] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f28ff7e1000 00:17:15.516 [2024-11-26 07:26:43.589718] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:15.516 [2024-11-26 07:26:43.599165] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:15.516 [2024-11-26 07:26:43.599186] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:17:15.516 [2024-11-26 07:26:43.604887] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:15.517 [2024-11-26 07:26:43.604922] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:15.517 [2024-11-26 07:26:43.604981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:17:15.517 [2024-11-26 07:26:43.604993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:17:15.517 [2024-11-26 07:26:43.604997] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:17:15.517 [2024-11-26 07:26:43.605885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:15.517 [2024-11-26 07:26:43.605892] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:17:15.517 [2024-11-26 07:26:43.605897] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:17:15.517 [2024-11-26 07:26:43.606894] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:15.517 [2024-11-26 07:26:43.606901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:17:15.517 [2024-11-26 07:26:43.606906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:15.517 [2024-11-26 07:26:43.607897] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:15.517 [2024-11-26 07:26:43.607903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:15.517 [2024-11-26 07:26:43.608904] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:17:15.517 [2024-11-26 07:26:43.608910] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:17:15.517 [2024-11-26 07:26:43.608914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:15.517 [2024-11-26 07:26:43.608919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:15.517 [2024-11-26 07:26:43.609024] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:17:15.517 [2024-11-26 07:26:43.609028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:15.517 [2024-11-26 07:26:43.609032] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:15.779 [2024-11-26 07:26:43.609911] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:15.779 [2024-11-26 07:26:43.610920] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:15.779 [2024-11-26 07:26:43.611918] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:15.779 [2024-11-26 07:26:43.612921] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:15.779 [2024-11-26 07:26:43.612972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:15.779 [2024-11-26 07:26:43.613925] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:15.779 [2024-11-26 07:26:43.613931] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:15.779 [2024-11-26 07:26:43.613934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.613951] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:17:15.779 [2024-11-26 07:26:43.613960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.613971] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:15.779 [2024-11-26 07:26:43.613974] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:15.779 [2024-11-26 07:26:43.613977] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:15.779 [2024-11-26 07:26:43.613989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:17:15.779 [2024-11-26 07:26:43.614025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:15.779 [2024-11-26 07:26:43.614033] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:17:15.779 [2024-11-26 07:26:43.614036] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:17:15.779 [2024-11-26 07:26:43.614039] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:17:15.779 [2024-11-26 07:26:43.614043] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:15.779 [2024-11-26 07:26:43.614047] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:17:15.779 [2024-11-26 07:26:43.614051] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:17:15.779 [2024-11-26 07:26:43.614054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:15.779 [2024-11-26 07:26:43.614083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:15.779 [2024-11-26 07:26:43.614091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.779 [2024-11-26 07:26:43.614097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.779 [2024-11-26 07:26:43.614103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.779 [2024-11-26 07:26:43.614109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:15.779 [2024-11-26 07:26:43.614112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:15.779 [2024-11-26 07:26:43.614133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:15.779 [2024-11-26 07:26:43.614138] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:17:15.779 
[2024-11-26 07:26:43.614142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614157] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:15.779 [2024-11-26 07:26:43.614170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:15.779 [2024-11-26 07:26:43.614213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:15.779 [2024-11-26 07:26:43.614227] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:15.779 [2024-11-26 07:26:43.614230] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:15.779 [2024-11-26 07:26:43.614234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:15.779 [2024-11-26 07:26:43.614247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:15.779 [2024-11-26 07:26:43.614254] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:17:15.779 [2024-11-26 07:26:43.614260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:15.779 [2024-11-26 07:26:43.614270] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:15.779 [2024-11-26 07:26:43.614273] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:15.779 [2024-11-26 07:26:43.614276] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:15.779 [2024-11-26 07:26:43.614280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:15.779 [2024-11-26 07:26:43.614299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:15.780 [2024-11-26 07:26:43.614309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:17:15.780 [2024-11-26 07:26:43.614316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:15.780 [2024-11-26 07:26:43.614321] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:15.780 [2024-11-26 07:26:43.614324] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:15.780 [2024-11-26 07:26:43.614326] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:15.780 [2024-11-26 07:26:43.614331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:15.780 [2024-11-26 07:26:43.614339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:15.780 [2024-11-26 07:26:43.614345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:15.780 [2024-11-26 07:26:43.614349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:15.780 [2024-11-26 07:26:43.614355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:17:15.780 [2024-11-26 07:26:43.614359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:15.780 [2024-11-26 07:26:43.614363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:15.780 [2024-11-26 07:26:43.614366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:17:15.780 [2024-11-26 07:26:43.614370] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:15.780 [2024-11-26 07:26:43.614373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:17:15.780 [2024-11-26 07:26:43.614377] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:17:15.780 [2024-11-26 07:26:43.614390] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:15.780 [2024-11-26 07:26:43.614397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:15.780 [2024-11-26 07:26:43.614405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:15.780 [2024-11-26 07:26:43.614410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:15.780 [2024-11-26 07:26:43.614418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:15.780 [2024-11-26 07:26:43.614432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:15.780 [2024-11-26 07:26:43.614440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:15.780 [2024-11-26 07:26:43.614450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:15.780 [2024-11-26 07:26:43.614459] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:15.780 [2024-11-26 07:26:43.614462] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:15.780 [2024-11-26 07:26:43.614465] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:15.780 [2024-11-26 07:26:43.614469] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:15.780 [2024-11-26 07:26:43.614471] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:15.780 [2024-11-26 07:26:43.614476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:15.780 [2024-11-26 07:26:43.614481] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:15.780 [2024-11-26 07:26:43.614484] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:15.780 [2024-11-26 07:26:43.614486] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:15.780 [2024-11-26 07:26:43.614491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:15.780 [2024-11-26 07:26:43.614496] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:15.780 [2024-11-26 07:26:43.614499] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:15.780 [2024-11-26 07:26:43.614501] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:15.780 [2024-11-26 07:26:43.614505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:15.780 [2024-11-26 07:26:43.614511] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:15.780 [2024-11-26 07:26:43.614514] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:15.780 [2024-11-26 07:26:43.614516] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:15.780 [2024-11-26 07:26:43.614521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:15.780 [2024-11-26 07:26:43.614526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:15.780 [2024-11-26 07:26:43.614534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:17:15.780 [2024-11-26 07:26:43.614541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:15.780 [2024-11-26 07:26:43.614546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:15.780 ===================================================== 00:17:15.780 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:15.780 ===================================================== 00:17:15.780 Controller Capabilities/Features 00:17:15.780 ================================ 00:17:15.780 Vendor ID: 4e58 00:17:15.780 Subsystem Vendor ID: 4e58 00:17:15.780 Serial Number: SPDK1 00:17:15.780 Model Number: SPDK bdev Controller 00:17:15.780 Firmware Version: 25.01 00:17:15.780 Recommended Arb Burst: 6 00:17:15.780 IEEE OUI Identifier: 8d 6b 50 00:17:15.780 Multi-path I/O 00:17:15.780 May have multiple subsystem ports: Yes 00:17:15.780 May have multiple controllers: Yes 00:17:15.780 Associated with SR-IOV VF: No 00:17:15.780 Max Data Transfer Size: 131072 00:17:15.780 Max Number of Namespaces: 32 00:17:15.780 Max Number of I/O Queues: 127 00:17:15.780 NVMe Specification Version (VS): 1.3 00:17:15.780 NVMe Specification Version (Identify): 1.3 00:17:15.780 Maximum Queue Entries: 256 00:17:15.780 Contiguous Queues Required: Yes 00:17:15.780 Arbitration Mechanisms Supported 00:17:15.780 Weighted Round Robin: Not Supported 00:17:15.780 Vendor Specific: Not Supported 00:17:15.780 Reset Timeout: 15000 ms 00:17:15.780 Doorbell Stride: 4 bytes 00:17:15.780 NVM Subsystem Reset: Not Supported 00:17:15.780 Command Sets Supported 00:17:15.780 NVM Command Set: Supported 00:17:15.780 Boot Partition: Not Supported 00:17:15.780 Memory Page Size Minimum: 4096 bytes 00:17:15.780 Memory Page Size Maximum: 4096 bytes 00:17:15.780 Persistent Memory Region: Not Supported 00:17:15.780 Optional Asynchronous Events Supported 00:17:15.780 Namespace Attribute Notices: Supported 00:17:15.780 Firmware Activation Notices: Not Supported 00:17:15.780 ANA Change Notices: Not Supported 00:17:15.780 PLE Aggregate Log Change Notices: Not Supported 00:17:15.780 LBA Status Info Alert Notices: Not Supported 00:17:15.780 EGE Aggregate Log Change Notices: Not Supported 00:17:15.780 Normal NVM Subsystem Shutdown event: Not Supported 00:17:15.780 Zone Descriptor Change Notices: Not Supported 00:17:15.780 Discovery Log Change Notices: Not Supported 00:17:15.780 Controller Attributes 00:17:15.780 128-bit Host Identifier: Supported 00:17:15.780 Non-Operational Permissive Mode: Not Supported 00:17:15.780 NVM Sets: Not Supported 00:17:15.780 Read Recovery Levels: Not Supported 00:17:15.780 Endurance Groups: Not Supported 00:17:15.780 Predictable Latency Mode: Not Supported 00:17:15.780 Traffic Based Keep ALive: Not Supported 00:17:15.780 Namespace Granularity: Not Supported 00:17:15.780 SQ Associations: Not Supported 00:17:15.780 UUID List: Not Supported 00:17:15.780 Multi-Domain Subsystem: Not Supported 00:17:15.780 Fixed Capacity Management: Not Supported 00:17:15.780 Variable Capacity Management: Not Supported 00:17:15.780 Delete Endurance Group: Not Supported 00:17:15.780 Delete NVM Set: Not Supported 00:17:15.780 Extended LBA Formats Supported: Not Supported 00:17:15.780 Flexible Data Placement Supported: Not Supported 00:17:15.780 00:17:15.780 Controller Memory Buffer Support 00:17:15.780 ================================ 00:17:15.780 
Supported: No 00:17:15.780 00:17:15.780 Persistent Memory Region Support 00:17:15.780 ================================ 00:17:15.780 Supported: No 00:17:15.780 00:17:15.780 Admin Command Set Attributes 00:17:15.780 ============================ 00:17:15.780 Security Send/Receive: Not Supported 00:17:15.780 Format NVM: Not Supported 00:17:15.780 Firmware Activate/Download: Not Supported 00:17:15.780 Namespace Management: Not Supported 00:17:15.780 Device Self-Test: Not Supported 00:17:15.780 Directives: Not Supported 00:17:15.780 NVMe-MI: Not Supported 00:17:15.780 Virtualization Management: Not Supported 00:17:15.780 Doorbell Buffer Config: Not Supported 00:17:15.780 Get LBA Status Capability: Not Supported 00:17:15.780 Command & Feature Lockdown Capability: Not Supported 00:17:15.780 Abort Command Limit: 4 00:17:15.780 Async Event Request Limit: 4 00:17:15.781 Number of Firmware Slots: N/A 00:17:15.781 Firmware Slot 1 Read-Only: N/A 00:17:15.781 Firmware Activation Without Reset: N/A 00:17:15.781 Multiple Update Detection Support: N/A 00:17:15.781 Firmware Update Granularity: No Information Provided 00:17:15.781 Per-Namespace SMART Log: No 00:17:15.781 Asymmetric Namespace Access Log Page: Not Supported 00:17:15.781 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:15.781 Command Effects Log Page: Supported 00:17:15.781 Get Log Page Extended Data: Supported 00:17:15.781 Telemetry Log Pages: Not Supported 00:17:15.781 Persistent Event Log Pages: Not Supported 00:17:15.781 Supported Log Pages Log Page: May Support 00:17:15.781 Commands Supported & Effects Log Page: Not Supported 00:17:15.781 Feature Identifiers & Effects Log Page:May Support 00:17:15.781 NVMe-MI Commands & Effects Log Page: May Support 00:17:15.781 Data Area 4 for Telemetry Log: Not Supported 00:17:15.781 Error Log Page Entries Supported: 128 00:17:15.781 Keep Alive: Supported 00:17:15.781 Keep Alive Granularity: 10000 ms 00:17:15.781 00:17:15.781 NVM Command Set Attributes 00:17:15.781 ========================== 00:17:15.781 Submission Queue Entry Size 00:17:15.781 Max: 64 00:17:15.781 Min: 64 00:17:15.781 Completion Queue Entry Size 00:17:15.781 Max: 16 00:17:15.781 Min: 16 00:17:15.781 Number of Namespaces: 32 00:17:15.781 Compare Command: Supported 00:17:15.781 Write Uncorrectable Command: Not Supported 00:17:15.781 Dataset Management Command: Supported 00:17:15.781 Write Zeroes Command: Supported 00:17:15.781 Set Features Save Field: Not Supported 00:17:15.781 Reservations: Not Supported 00:17:15.781 Timestamp: Not Supported 00:17:15.781 Copy: Supported 00:17:15.781 Volatile Write Cache: Present 00:17:15.781 Atomic Write Unit (Normal): 1 00:17:15.781 Atomic Write Unit (PFail): 1 00:17:15.781 Atomic Compare & Write Unit: 1 00:17:15.781 Fused Compare & Write: Supported 00:17:15.781 Scatter-Gather List 00:17:15.781 SGL Command Set: Supported (Dword aligned) 00:17:15.781 SGL Keyed: Not Supported 00:17:15.781 SGL Bit Bucket Descriptor: Not Supported 00:17:15.781 SGL Metadata Pointer: Not Supported 00:17:15.781 Oversized SGL: Not Supported 00:17:15.781 SGL Metadata Address: Not Supported 00:17:15.781 SGL Offset: Not Supported 00:17:15.781 Transport SGL Data Block: Not Supported 00:17:15.781 Replay Protected Memory Block: Not Supported 00:17:15.781 00:17:15.781 Firmware Slot Information 00:17:15.781 ========================= 00:17:15.781 Active slot: 1 00:17:15.781 Slot 1 Firmware Revision: 25.01 00:17:15.781 00:17:15.781 00:17:15.781 Commands Supported and Effects 00:17:15.781 ============================== 00:17:15.781 Admin 
Commands 00:17:15.781 -------------- 00:17:15.781 Get Log Page (02h): Supported 00:17:15.781 Identify (06h): Supported 00:17:15.781 Abort (08h): Supported 00:17:15.781 Set Features (09h): Supported 00:17:15.781 Get Features (0Ah): Supported 00:17:15.781 Asynchronous Event Request (0Ch): Supported 00:17:15.781 Keep Alive (18h): Supported 00:17:15.781 I/O Commands 00:17:15.781 ------------ 00:17:15.781 Flush (00h): Supported LBA-Change 00:17:15.781 Write (01h): Supported LBA-Change 00:17:15.781 Read (02h): Supported 00:17:15.781 Compare (05h): Supported 00:17:15.781 Write Zeroes (08h): Supported LBA-Change 00:17:15.781 Dataset Management (09h): Supported LBA-Change 00:17:15.781 Copy (19h): Supported LBA-Change 00:17:15.781
00:17:15.781 Error Log 00:17:15.781 ========= 00:17:15.781 00:17:15.781 Arbitration 00:17:15.781 =========== 00:17:15.781 Arbitration Burst: 1 00:17:15.781
00:17:15.781 Power Management 00:17:15.781 ================ 00:17:15.781 Number of Power States: 1 00:17:15.781 Current Power State: Power State #0 00:17:15.781 Power State #0: 00:17:15.781 Max Power: 0.00 W 00:17:15.781 Non-Operational State: Operational 00:17:15.781 Entry Latency: Not Reported 00:17:15.781 Exit Latency: Not Reported 00:17:15.781 Relative Read Throughput: 0 00:17:15.781 Relative Read Latency: 0 00:17:15.781 Relative Write Throughput: 0 00:17:15.781 Relative Write Latency: 0 00:17:15.781 Idle Power: Not Reported 00:17:15.781 Active Power: Not Reported 00:17:15.781 Non-Operational Permissive Mode: Not Supported 00:17:15.781
00:17:15.781 Health Information 00:17:15.781 ================== 00:17:15.781 Critical Warnings: 00:17:15.781 Available Spare Space: OK 00:17:15.781 Temperature: OK 00:17:15.781 Device Reliability: OK 00:17:15.781 Read Only: No 00:17:15.781 Volatile Memory Backup: OK 00:17:15.781 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:15.781 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:15.781 Available Spare: 0% 00:17:15.781 Available Spare Threshold: 0% 00:17:15.781 Life Percentage Used: 0% 00:17:15.781 Data Units Read: 0 00:17:15.781 Data Units Written: 0 00:17:15.781 Host Read Commands: 0 00:17:15.781 Host Write Commands: 0 00:17:15.781 Controller Busy Time: 0 minutes 00:17:15.781 Power Cycles: 0 00:17:15.781 Power On Hours: 0 hours 00:17:15.781 Unsafe Shutdowns: 0 00:17:15.781 Unrecoverable Media Errors: 0 00:17:15.781 Lifetime Error Log Entries: 0 00:17:15.781 Warning Temperature Time: 0 minutes 00:17:15.781 Critical Temperature Time: 0 minutes 00:17:15.781
00:17:15.781 Number of Queues 00:17:15.781 ================ 00:17:15.781 Number of I/O Submission Queues: 127 00:17:15.781 Number of I/O Completion Queues: 127 00:17:15.781
00:17:15.781 Active Namespaces 00:17:15.781 ================= 00:17:15.781 Namespace ID:1 00:17:15.781 Error Recovery Timeout: Unlimited 00:17:15.781 Command Set Identifier: NVM (00h) 00:17:15.781 Deallocate: Supported 00:17:15.781 Deallocated/Unwritten Error: Not Supported 00:17:15.781 Deallocated Read Value: Unknown 00:17:15.781 Deallocate in Write Zeroes: Not Supported 00:17:15.781 Deallocated Guard Field: 0xFFFF 00:17:15.781 Flush: Supported 00:17:15.781 Reservation: Supported 00:17:15.781 Namespace Sharing Capabilities: Multiple Controllers 00:17:15.781 Size (in LBAs): 131072 (0GiB) 00:17:15.781 Capacity (in LBAs): 131072 (0GiB) 00:17:15.781 Utilization (in LBAs): 131072 (0GiB) 00:17:15.781 NGUID: 7BB9C8AB213F4EA195253B1A739BFDBF 00:17:15.781 UUID: 7bb9c8ab-213f-4ea1-9525-3b1a739bfdbf 00:17:15.781 Thin Provisioning: Not Supported 00:17:15.781 Per-NS Atomic Units: Yes 00:17:15.781 Atomic Boundary Size (Normal): 0 00:17:15.781 Atomic Boundary Size (PFail): 0 00:17:15.781 Atomic Boundary Offset: 0 00:17:15.781 Maximum Single Source Range Length: 65535 00:17:15.781 Maximum Copy Length: 65535 00:17:15.781 Maximum Source Range Count: 1 00:17:15.781 NGUID/EUI64 Never Reused: No 00:17:15.781 Namespace Write Protected: No 00:17:15.781 Number of LBA Formats: 1 00:17:15.781 Current LBA Format: LBA Format #00 00:17:15.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:15.781 00:17:15.781
[2024-11-26 07:26:43.614621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:15.781 [2024-11-26 07:26:43.614632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:15.781 [2024-11-26 07:26:43.614653] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD [2024-11-26 07:26:43.614660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-26 07:26:43.614665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-26 07:26:43.614669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-26 07:26:43.614674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-26 07:26:43.618165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 [2024-11-26 07:26:43.618173] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 [2024-11-26 07:26:43.618951] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller [2024-11-26 07:26:43.618991] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us [2024-11-26 07:26:43.618995] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms [2024-11-26 07:26:43.619967] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 [2024-11-26 07:26:43.619975] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds [2024-11-26 07:26:43.620028] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl [2024-11-26 07:26:43.620981] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
07:26:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
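The spdk_nvme_perf invocation just above drives the read phase of this test. As a hedged reading of its flags (the comments below are my gloss on the perf tool's usage text, not part of the log; the DPDK EAL line later in this run does show -g surfacing as --single-file-segments):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  # -r : transport ID string, naming the vfio-user transport, socket directory, and subsystem NQN
  # -s : hugepage memory to reserve for DPDK, in MB (assumption, based on perf usage text)
  # -g : single-file DPDK memory segments (appears as --single-file-segments in the EAL args)
  # -q 128 : queue depth   -o 4096 : I/O size in bytes   -w read : workload pattern
  # -t 5 : seconds to run  -c 0x2 : core mask, i.e. run the I/O worker on core 1
  $PERF -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2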
00:17:15.782 [2024-11-26 07:26:43.785786] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:21.069 Initializing NVMe Controllers 00:17:21.069 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:21.069 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:21.069 Initialization complete. Launching workers. 00:17:21.069 ======================================================== 00:17:21.069 Latency(us) 00:17:21.069 Device Information : IOPS MiB/s Average min max 00:17:21.069 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39988.11 156.20 3200.82 847.28 9770.04 00:17:21.069 ======================================================== 00:17:21.069 Total : 39988.11 156.20 3200.82 847.28 9770.04 00:17:21.069 00:17:21.069 [2024-11-26 07:26:48.803848] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:21.069 07:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:21.069 [2024-11-26 07:26:48.995660] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:26.353 Initializing NVMe Controllers 00:17:26.353 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:26.353 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:26.353 Initialization complete. Launching workers. 
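The read table above is internally consistent: throughput is IOPS times I/O size, and with a fixed queue depth the average latency follows from Little's law. A quick check on those numbers while the write phase launched above runs (it prints its own table below):

  # 39988.11 IOPS * 4096 B ~= 156.2 MiB/s; 128 in flight / 39988.11 IOPS ~= 3201 us
  awk 'BEGIN { iops = 39988.11
               printf "%.2f MiB/s\n", iops * 4096 / 1048576
               printf "%.0f us avg latency\n", 128 / iops * 1e6 }'

Both match the table (156.20 MiB/s, 3200.82 us average) to within rounding.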
00:17:26.353 ======================================================== 00:17:26.353 Latency(us) 00:17:26.353 Device Information : IOPS MiB/s Average min max 00:17:26.353 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15891.60 62.08 8062.26 5742.48 15962.85 00:17:26.353 ======================================================== 00:17:26.353 Total : 15891.60 62.08 8062.26 5742.48 15962.85 00:17:26.353 00:17:26.353 [2024-11-26 07:26:54.030417] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:26.353 07:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:26.353 [2024-11-26 07:26:54.229328] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:31.640 [2024-11-26 07:26:59.294350] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:31.640 Initializing NVMe Controllers 00:17:31.640 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:31.640 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:31.640 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:31.640 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:31.640 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:31.640 Initialization complete. Launching workers. 00:17:31.640 Starting thread on core 2 00:17:31.640 Starting thread on core 3 00:17:31.640 Starting thread on core 1 00:17:31.640 07:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:31.640 [2024-11-26 07:26:59.540512] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:34.939 [2024-11-26 07:27:02.707311] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:34.939 Initializing NVMe Controllers 00:17:34.939 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:34.939 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:34.939 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:34.939 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:34.939 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:34.939 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:34.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:34.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:34.939 Initialization complete. Launching workers. 
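The -c arguments to these tools are hex core masks: bit n selects lcore n. That is why the reconnect run (-c 0xE) started threads on cores 1, 2 and 3 above, and why the arbitration run (-c 0xf) associates lcores 0 through 3, as the thread-start lines around this point show. A one-liner to decode a mask:

  # Decode an SPDK/DPDK core mask into the lcores it selects:
  python3 -c 'm = 0xE; print([bit for bit in range(32) if m >> bit & 1])'   # [1, 2, 3]
  python3 -c 'm = 0xF; print([bit for bit in range(32) if m >> bit & 1])'   # [0, 1, 2, 3]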
00:17:34.939 Starting thread on core 1 with urgent priority queue 00:17:34.939 Starting thread on core 2 with urgent priority queue 00:17:34.939 Starting thread on core 3 with urgent priority queue 00:17:34.939 Starting thread on core 0 with urgent priority queue 00:17:34.939 SPDK bdev Controller (SPDK1 ) core 0: 13075.67 IO/s 7.65 secs/100000 ios 00:17:34.939 SPDK bdev Controller (SPDK1 ) core 1: 12108.33 IO/s 8.26 secs/100000 ios 00:17:34.939 SPDK bdev Controller (SPDK1 ) core 2: 14465.33 IO/s 6.91 secs/100000 ios 00:17:34.939 SPDK bdev Controller (SPDK1 ) core 3: 12957.00 IO/s 7.72 secs/100000 ios 00:17:34.939 ======================================================== 00:17:34.939 00:17:34.939 07:27:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:34.939 [2024-11-26 07:27:02.945363] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:34.939 Initializing NVMe Controllers 00:17:34.939 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:34.939 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:34.939 Namespace ID: 1 size: 0GB 00:17:34.939 Initialization complete. 00:17:34.939 INFO: using host memory buffer for IO 00:17:34.939 Hello world! 00:17:34.939 [2024-11-26 07:27:02.981589] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:34.939 07:27:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:35.200 [2024-11-26 07:27:03.221547] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:36.585 Initializing NVMe Controllers 00:17:36.585 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:36.585 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:36.585 Initialization complete. Launching workers. 
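In the arbitration table above, the secs/100000 ios column is simply the configured 100000-I/O target (-n 100000 in the echoed configuration) divided by the measured IO/s, so the two columns cross-check; the overhead run that just launched prints its latency histograms next. For the core 0 row:

  # 100000 ios / 13075.67 IO/s = 7.65 s, matching the 7.65 secs/100000 ios column
  awk 'BEGIN { printf "%.2f s\n", 100000 / 13075.67 }'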
00:17:36.585 submit (in ns) avg, min, max = 5526.1, 2835.0, 3999230.0 00:17:36.585 complete (in ns) avg, min, max = 16878.8, 1632.5, 4000297.5 00:17:36.585 00:17:36.585 Submit histogram 00:17:36.585 ================ 00:17:36.585 Range in us Cumulative Count 00:17:36.585 2.827 - 2.840: 0.0489% ( 10) 00:17:36.585 2.840 - 2.853: 0.9885% ( 192) 00:17:36.585 2.853 - 2.867: 3.0733% ( 426) 00:17:36.585 2.867 - 2.880: 6.6947% ( 740) 00:17:36.585 2.880 - 2.893: 11.6962% ( 1022) 00:17:36.585 2.893 - 2.907: 17.4317% ( 1172) 00:17:36.585 2.907 - 2.920: 22.8834% ( 1114) 00:17:36.585 2.920 - 2.933: 28.5553% ( 1159) 00:17:36.585 2.933 - 2.947: 34.3056% ( 1175) 00:17:36.585 2.947 - 2.960: 39.2630% ( 1013) 00:17:36.585 2.960 - 2.973: 44.6560% ( 1102) 00:17:36.585 2.973 - 2.987: 50.8760% ( 1271) 00:17:36.585 2.987 - 3.000: 59.1759% ( 1696) 00:17:36.585 3.000 - 3.013: 67.6079% ( 1723) 00:17:36.585 3.013 - 3.027: 75.6533% ( 1644) 00:17:36.585 3.027 - 3.040: 82.0544% ( 1308) 00:17:36.585 3.040 - 3.053: 88.7051% ( 1359) 00:17:36.585 3.053 - 3.067: 93.5402% ( 988) 00:17:36.585 3.067 - 3.080: 96.4862% ( 602) 00:17:36.585 3.080 - 3.093: 98.0621% ( 322) 00:17:36.585 3.093 - 3.107: 98.8353% ( 158) 00:17:36.585 3.107 - 3.120: 99.2806% ( 91) 00:17:36.585 3.120 - 3.133: 99.4274% ( 30) 00:17:36.585 3.133 - 3.147: 99.4910% ( 13) 00:17:36.585 3.147 - 3.160: 99.5204% ( 6) 00:17:36.585 3.160 - 3.173: 99.5400% ( 4) 00:17:36.585 3.173 - 3.187: 99.5449% ( 1) 00:17:36.585 3.187 - 3.200: 99.5547% ( 2) 00:17:36.585 3.267 - 3.280: 99.5645% ( 2) 00:17:36.585 3.280 - 3.293: 99.5693% ( 1) 00:17:36.585 3.320 - 3.333: 99.5742% ( 1) 00:17:36.585 3.333 - 3.347: 99.5791% ( 1) 00:17:36.585 3.373 - 3.387: 99.5840% ( 1) 00:17:36.585 3.440 - 3.467: 99.5938% ( 2) 00:17:36.585 3.467 - 3.493: 99.5987% ( 1) 00:17:36.586 3.653 - 3.680: 99.6036% ( 1) 00:17:36.586 3.680 - 3.707: 99.6085% ( 1) 00:17:36.586 3.867 - 3.893: 99.6183% ( 2) 00:17:36.586 4.000 - 4.027: 99.6232% ( 1) 00:17:36.586 4.160 - 4.187: 99.6281% ( 1) 00:17:36.586 4.187 - 4.213: 99.6330% ( 1) 00:17:36.586 4.240 - 4.267: 99.6379% ( 1) 00:17:36.586 4.347 - 4.373: 99.6428% ( 1) 00:17:36.586 4.400 - 4.427: 99.6476% ( 1) 00:17:36.586 4.507 - 4.533: 99.6525% ( 1) 00:17:36.586 4.533 - 4.560: 99.6574% ( 1) 00:17:36.586 4.560 - 4.587: 99.6623% ( 1) 00:17:36.586 4.587 - 4.613: 99.6672% ( 1) 00:17:36.586 4.640 - 4.667: 99.6721% ( 1) 00:17:36.586 4.693 - 4.720: 99.6770% ( 1) 00:17:36.586 4.800 - 4.827: 99.6819% ( 1) 00:17:36.586 4.933 - 4.960: 99.6868% ( 1) 00:17:36.586 4.960 - 4.987: 99.7015% ( 3) 00:17:36.586 4.987 - 5.013: 99.7064% ( 1) 00:17:36.586 5.013 - 5.040: 99.7162% ( 2) 00:17:36.586 5.040 - 5.067: 99.7406% ( 5) 00:17:36.586 5.067 - 5.093: 99.7504% ( 2) 00:17:36.586 5.120 - 5.147: 99.7553% ( 1) 00:17:36.586 5.173 - 5.200: 99.7651% ( 2) 00:17:36.586 5.200 - 5.227: 99.7700% ( 1) 00:17:36.586 5.280 - 5.307: 99.7749% ( 1) 00:17:36.586 5.333 - 5.360: 99.7798% ( 1) 00:17:36.586 5.360 - 5.387: 99.7994% ( 4) 00:17:36.586 5.387 - 5.413: 99.8042% ( 1) 00:17:36.586 5.440 - 5.467: 99.8140% ( 2) 00:17:36.586 5.493 - 5.520: 99.8189% ( 1) 00:17:36.586 5.600 - 5.627: 99.8238% ( 1) 00:17:36.586 5.627 - 5.653: 99.8287% ( 1) 00:17:36.586 5.707 - 5.733: 99.8336% ( 1) 00:17:36.586 5.787 - 5.813: 99.8385% ( 1) 00:17:36.586 5.893 - 5.920: 99.8483% ( 2) 00:17:36.586 6.027 - 6.053: 99.8532% ( 1) 00:17:36.586 6.053 - 6.080: 99.8581% ( 1) 00:17:36.586 6.133 - 6.160: 99.8630% ( 1) 00:17:36.586 6.160 - 6.187: 99.8679% ( 1) 00:17:36.586 6.187 - 6.213: 99.8777% ( 2) 00:17:36.586 6.267 - 6.293: 99.8825% ( 1) 
00:17:36.586 6.320 - 6.347: 99.8923% ( 2) 00:17:36.586 6.427 - 6.453: 99.8972% ( 1) 00:17:36.586 6.507 - 6.533: 99.9021% ( 1) 00:17:36.586 6.640 - 6.667: 99.9070% ( 1) 00:17:36.586 6.667 - 6.693: 99.9168% ( 2) 00:17:36.586 6.747 - 6.773: 99.9217% ( 1) 00:17:36.586 6.773 - 6.800: 99.9266% ( 1) 00:17:36.586 11.467 - 11.520: 99.9315% ( 1) 00:17:36.586 12.107 - 12.160: 99.9364% ( 1) 00:17:36.586 3986.773 - 4014.080: 100.0000% ( 13) 00:17:36.586 00:17:36.586
Complete histogram 00:17:36.586 ================== 00:17:36.586 Range in us Cumulative Count 00:17:36.586 1.627 - 1.633: 0.0049% ( 1) 00:17:36.586 1.633 - 1.640: 0.3719% ( 75) 00:17:36.586 1.640 - 1.647: 0.6509% ( 57) 00:17:36.586 1.647 - 1.653: 0.7536% ( 21) 00:17:36.586 1.653 - 1.660: 0.8271% ( 15) 00:17:36.586 1.660 - 1.667: 0.9347% ( 22) 00:17:36.586 1.667 - 1.673: 0.9739% ( 8) 00:17:36.586 1.673 - 1.680: 0.9885% ( 3) 00:17:36.586 1.680 - 1.687: 11.3732% ( 2122) 00:17:36.586 1.687 - 1.693: 45.8109% ( 7037) 00:17:36.586 1.693 - 1.700: 54.0912% ( 1692) 00:17:36.586 1.700 - 1.707: 64.0697% ( 2039) 00:17:36.586 1.707 - 1.720: 76.8278% ( 2607) 00:17:36.586 1.720 - 1.733: 82.8619% ( 1233) 00:17:36.586 1.733 - 1.747: 84.3056% ( 295) 00:17:36.586 1.747 - 1.760: 89.8209% ( 1127) 00:17:36.586 1.760 - 1.773: 95.1111% ( 1081) 00:17:36.586 1.773 - 1.787: 97.8712% ( 564) 00:17:36.586 1.787 - 1.800: 99.0800% ( 247) 00:17:36.586 1.800 - 1.813: 99.4421% ( 74) 00:17:36.586 1.813 - 1.827: 99.4813% ( 8) 00:17:36.586 2.013 - 2.027: 99.4862% ( 1) 00:17:36.586 3.573 - 3.600: 99.4910% ( 1) 00:17:36.586 3.627 - 3.653: 99.4959% ( 1) 00:17:36.586 3.733 - 3.760: 99.5106% ( 3) 00:17:36.586 3.760 - 3.787: 99.5155% ( 1) 00:17:36.586 3.813 - 3.840: 99.5204% ( 1) 00:17:36.586 3.867 - 3.893: 99.5253% ( 1) 00:17:36.586 3.920 - 3.947: 99.5302% ( 1) 00:17:36.586 3.947 - 3.973: 99.5351% ( 1) 00:17:36.586 4.027 - 4.053: 99.5400% ( 1) 00:17:36.586 4.053 - 4.080: 99.5449% ( 1) 00:17:36.586 4.133 - 4.160: 99.5498% ( 1) 00:17:36.586 4.187 - 4.213: 99.5547% ( 1) 00:17:36.586 4.347 - 4.373: 99.5596% ( 1) 00:17:36.586 4.453 - 4.480: 99.5645% ( 1) 00:17:36.586 4.533 - 4.560: 99.5693% ( 1) 00:17:36.586 4.613 - 4.640: 99.5742% ( 1) 00:17:36.586 4.640 - 4.667: 99.5791% ( 1) 00:17:36.586 4.907 - 4.933: 99.5889% ( 2) 00:17:36.586 8.267 - 8.320: 99.5938% ( 1) 00:17:36.586 9.013 - 9.067: 99.5987% ( 1) 00:17:36.586 9.440 - 9.493: 99.6036% ( 1) 00:17:36.586 9.813 - 9.867: 99.6085% ( 1) 00:17:36.586 38.613 - 38.827: 99.6134% ( 1) 00:17:36.586 110.080 - 110.933: 99.6183% ( 1) 00:17:36.586 2225.493 - 2239.147: 99.6232% ( 1) 00:17:36.586 3986.773 - 4014.080: 100.0000% ( 77) 00:17:36.586 00:17:36.586
[2024-11-26 07:27:04.242256] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:36.586
07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_get_subsystems 00:17:36.586 [ 00:17:36.586 { 00:17:36.586 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:36.586 "subtype": "Discovery", 00:17:36.586 "listen_addresses": [], 00:17:36.586 "allow_any_host": true, 00:17:36.586 "hosts": [] 00:17:36.586 }, 00:17:36.586 { 00:17:36.586 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:36.586 "subtype": "NVMe", 00:17:36.586 "listen_addresses": [ 00:17:36.586 { 00:17:36.586 "trtype": "VFIOUSER", 00:17:36.586 "adrfam": "IPv4", 00:17:36.586 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:36.586 "trsvcid": "0" 00:17:36.586 } 00:17:36.586 ], 00:17:36.586 "allow_any_host": true, 00:17:36.586 "hosts": [], 00:17:36.586 "serial_number": "SPDK1", 00:17:36.586 "model_number": "SPDK bdev Controller", 00:17:36.586 "max_namespaces": 32, 00:17:36.586 "min_cntlid": 1, 00:17:36.586 "max_cntlid": 65519, 00:17:36.586 "namespaces": [ 00:17:36.586 { 00:17:36.586 "nsid": 1, 00:17:36.586 "bdev_name": "Malloc1", 00:17:36.586 "name": "Malloc1", 00:17:36.586 "nguid": "7BB9C8AB213F4EA195253B1A739BFDBF", 00:17:36.586 "uuid": "7bb9c8ab-213f-4ea1-9525-3b1a739bfdbf" 00:17:36.586 } 00:17:36.586 ] 00:17:36.586 }, 00:17:36.586 { 00:17:36.586 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:36.586 "subtype": "NVMe", 00:17:36.586 "listen_addresses": [ 00:17:36.586 { 00:17:36.586 "trtype": "VFIOUSER", 00:17:36.586 "adrfam": "IPv4", 00:17:36.586 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:36.586 "trsvcid": "0" 00:17:36.586 } 00:17:36.586 ], 00:17:36.586 "allow_any_host": true, 00:17:36.586 "hosts": [], 00:17:36.586 "serial_number": "SPDK2", 00:17:36.586 "model_number": "SPDK bdev Controller", 00:17:36.586 "max_namespaces": 32, 00:17:36.586 "min_cntlid": 1, 00:17:36.586 "max_cntlid": 65519, 00:17:36.586 "namespaces": [ 00:17:36.586 { 00:17:36.586 "nsid": 1, 00:17:36.586 "bdev_name": "Malloc2", 00:17:36.586 "name": "Malloc2", 00:17:36.586 "nguid": "73DF0DAEF0304672AB19C3625F996E7B", 00:17:36.586 "uuid": "73df0dae-f030-4672-ab19-c3625f996e7b" 00:17:36.586 } 00:17:36.586 ] 00:17:36.586 } 00:17:36.586 ] 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1413694 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:36.586 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:36.587 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:36.587 [2024-11-26 07:27:04.619520] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:36.847 Malloc3 00:17:36.847 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:36.847 [2024-11-26 07:27:04.870228] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:36.847 07:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:36.847 Asynchronous Event Request test 00:17:36.847 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:36.847 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:36.847 Registering asynchronous event callbacks... 00:17:36.847 Starting namespace attribute notice tests for all controllers... 00:17:36.847 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:36.847 aer_cb - Changed Namespace 00:17:36.847 Cleaning up... 00:17:37.108 [ 00:17:37.108 { 00:17:37.108 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:37.108 "subtype": "Discovery", 00:17:37.108 "listen_addresses": [], 00:17:37.108 "allow_any_host": true, 00:17:37.108 "hosts": [] 00:17:37.108 }, 00:17:37.108 { 00:17:37.108 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:37.108 "subtype": "NVMe", 00:17:37.108 "listen_addresses": [ 00:17:37.108 { 00:17:37.108 "trtype": "VFIOUSER", 00:17:37.108 "adrfam": "IPv4", 00:17:37.108 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:37.108 "trsvcid": "0" 00:17:37.108 } 00:17:37.108 ], 00:17:37.108 "allow_any_host": true, 00:17:37.108 "hosts": [], 00:17:37.108 "serial_number": "SPDK1", 00:17:37.108 "model_number": "SPDK bdev Controller", 00:17:37.108 "max_namespaces": 32, 00:17:37.108 "min_cntlid": 1, 00:17:37.108 "max_cntlid": 65519, 00:17:37.108 "namespaces": [ 00:17:37.108 { 00:17:37.108 "nsid": 1, 00:17:37.108 "bdev_name": "Malloc1", 00:17:37.108 "name": "Malloc1", 00:17:37.108 "nguid": "7BB9C8AB213F4EA195253B1A739BFDBF", 00:17:37.108 "uuid": "7bb9c8ab-213f-4ea1-9525-3b1a739bfdbf" 00:17:37.108 }, 00:17:37.108 { 00:17:37.108 "nsid": 2, 00:17:37.108 "bdev_name": "Malloc3", 00:17:37.108 "name": "Malloc3", 00:17:37.108 "nguid": "8B98160DC5684B77AD682C4E454A5045", 00:17:37.108 "uuid": "8b98160d-c568-4b77-ad68-2c4e454a5045" 00:17:37.108 } 00:17:37.108 ] 00:17:37.108 }, 00:17:37.108 { 00:17:37.109 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:37.109 "subtype": "NVMe", 00:17:37.109 "listen_addresses": [ 00:17:37.109 { 00:17:37.109 "trtype": "VFIOUSER", 00:17:37.109 "adrfam": "IPv4", 00:17:37.109 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:37.109 "trsvcid": "0" 00:17:37.109 } 00:17:37.109 ], 00:17:37.109 "allow_any_host": true, 00:17:37.109 "hosts": [], 00:17:37.109 "serial_number": "SPDK2", 00:17:37.109 "model_number": "SPDK bdev 
Controller", 00:17:37.109 "max_namespaces": 32, 00:17:37.109 "min_cntlid": 1, 00:17:37.109 "max_cntlid": 65519, 00:17:37.109 "namespaces": [ 00:17:37.109 { 00:17:37.109 "nsid": 1, 00:17:37.109 "bdev_name": "Malloc2", 00:17:37.109 "name": "Malloc2", 00:17:37.109 "nguid": "73DF0DAEF0304672AB19C3625F996E7B", 00:17:37.109 "uuid": "73df0dae-f030-4672-ab19-c3625f996e7b" 00:17:37.109 } 00:17:37.109 ] 00:17:37.109 } 00:17:37.109 ] 00:17:37.109 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1413694 00:17:37.109 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:37.109 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:37.109 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:37.109 07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:37.109 [2024-11-26 07:27:05.112282] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:17:37.109 [2024-11-26 07:27:05.112357] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1413907 ] 00:17:37.109 [2024-11-26 07:27:05.152385] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:37.109 [2024-11-26 07:27:05.157585] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:37.109 [2024-11-26 07:27:05.157605] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2fef530000 00:17:37.109 [2024-11-26 07:27:05.158582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:37.109 [2024-11-26 07:27:05.159593] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:37.109 [2024-11-26 07:27:05.160596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:37.109 [2024-11-26 07:27:05.161599] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:37.109 [2024-11-26 07:27:05.162607] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:37.109 [2024-11-26 07:27:05.163609] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:37.109 [2024-11-26 07:27:05.164621] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:37.109 [2024-11-26 07:27:05.165623] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:17:37.109 [2024-11-26 07:27:05.166630] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:37.109 [2024-11-26 07:27:05.166638] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2fef525000 00:17:37.109 [2024-11-26 07:27:05.167549] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:37.109 [2024-11-26 07:27:05.176923] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:37.109 [2024-11-26 07:27:05.176945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:17:37.109 [2024-11-26 07:27:05.182010] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:37.109 [2024-11-26 07:27:05.182043] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:37.109 [2024-11-26 07:27:05.182102] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:17:37.109 [2024-11-26 07:27:05.182113] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:17:37.109 [2024-11-26 07:27:05.182118] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:17:37.109 [2024-11-26 07:27:05.183014] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:37.109 [2024-11-26 07:27:05.183022] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:17:37.109 [2024-11-26 07:27:05.183028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:17:37.109 [2024-11-26 07:27:05.184018] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:37.109 [2024-11-26 07:27:05.184026] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:17:37.109 [2024-11-26 07:27:05.184031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:17:37.109 [2024-11-26 07:27:05.185025] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:37.109 [2024-11-26 07:27:05.185032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:37.109 [2024-11-26 07:27:05.186037] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:37.109 [2024-11-26 07:27:05.186045] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
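This identify bring-up against cnode2 walks the standard NVMe controller-enable sequence over vfio-user, and the BAR0 offsets in the register reads above are the spec-defined registers. Decoded (register names and field decoding are mine, from the NVMe spec; the values are from the log):

  # 0x00 CAP  -> 0x201e0100ff  MQES 0xff = 256-entry queues (0-based), TO 0x1e = 15000 ms reset timeout
  # 0x08 VS   -> 0x10300       NVMe 1.3, matching "NVMe Specification Version (VS): 1.3" below
  # 0x14 CC   -> 0x0           EN=0: controller not yet enabled
  # 0x1c CSTS -> 0x0           RDY=0: safe to program the admin queue and enable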
00:17:37.109 [2024-11-26 07:27:05.186048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:17:37.109 [2024-11-26 07:27:05.186053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:37.109 [2024-11-26 07:27:05.186161] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:17:37.109 [2024-11-26 07:27:05.186164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:37.109 [2024-11-26 07:27:05.186168] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:37.109 [2024-11-26 07:27:05.187040] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:37.109 [2024-11-26 07:27:05.188050] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:37.109 [2024-11-26 07:27:05.189062] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:37.109 [2024-11-26 07:27:05.190063] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:37.109 [2024-11-26 07:27:05.190095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:37.109 [2024-11-26 07:27:05.191067] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:37.109 [2024-11-26 07:27:05.191074] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:37.109 [2024-11-26 07:27:05.191077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:17:37.109 [2024-11-26 07:27:05.191092] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:17:37.109 [2024-11-26 07:27:05.191097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:17:37.109 [2024-11-26 07:27:05.191106] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:37.109 [2024-11-26 07:27:05.191110] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:37.109 [2024-11-26 07:27:05.191113] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:37.109 [2024-11-26 07:27:05.191124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:37.109 [2024-11-26 07:27:05.199165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:37.109 
[2024-11-26 07:27:05.199175] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:17:37.109 [2024-11-26 07:27:05.199179] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:17:37.109 [2024-11-26 07:27:05.199182] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:17:37.109 [2024-11-26 07:27:05.199185] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:37.109 [2024-11-26 07:27:05.199190] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:17:37.109 [2024-11-26 07:27:05.199194] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:17:37.109 [2024-11-26 07:27:05.199197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:17:37.109 [2024-11-26 07:27:05.199204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:17:37.109 [2024-11-26 07:27:05.199211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:37.371 [2024-11-26 07:27:05.207165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:37.371 [2024-11-26 07:27:05.207175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.372 [2024-11-26 07:27:05.207182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.372 [2024-11-26 07:27:05.207188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.372 [2024-11-26 07:27:05.207194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.372 [2024-11-26 07:27:05.207197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.207202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.207209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:37.372 [2024-11-26 07:27:05.215163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:37.372 [2024-11-26 07:27:05.215171] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:17:37.372 [2024-11-26 07:27:05.215175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:17:37.372 [2024-11-26 07:27:05.215180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.215185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.215193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:37.372 [2024-11-26 07:27:05.223164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:37.372 [2024-11-26 07:27:05.223212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.223218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.223224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:37.372 [2024-11-26 07:27:05.223227] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:37.372 [2024-11-26 07:27:05.223230] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:37.372 [2024-11-26 07:27:05.223234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:37.372 [2024-11-26 07:27:05.231164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:37.372 [2024-11-26 07:27:05.231176] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:17:37.372 [2024-11-26 07:27:05.231183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.231188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.231193] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:37.372 [2024-11-26 07:27:05.231196] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:37.372 [2024-11-26 07:27:05.231199] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:37.372 [2024-11-26 07:27:05.231203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:37.372 [2024-11-26 07:27:05.239165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:37.372 [2024-11-26 07:27:05.239179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.239185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.239191] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:37.372 [2024-11-26 07:27:05.239194] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:37.372 [2024-11-26 07:27:05.239196] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:37.372 [2024-11-26 07:27:05.239201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:37.372 [2024-11-26 07:27:05.247163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:37.372 [2024-11-26 07:27:05.247171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.247175] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.247181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.247187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.247191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.247195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.247198] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:17:37.372 [2024-11-26 07:27:05.247202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:17:37.372 [2024-11-26 07:27:05.247205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:17:37.372 [2024-11-26 07:27:05.247218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:37.372 [2024-11-26 07:27:05.255163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:37.372 [2024-11-26 07:27:05.255174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:37.372 [2024-11-26 07:27:05.263164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:37.372 [2024-11-26 07:27:05.263174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:37.372 [2024-11-26 07:27:05.271164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
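Every step of this identify phase reuses the same Identify opcode (06h); only the CNS value in CDW10 changes, which is all that distinguishes the commands above. The Set Features (Number of Queues) completion seen a little earlier also explains the queue counts reported later:

  # IDENTIFY (06h) CNS values in the commands above:
  #   cdw10:00000001  CNS 01h  identify controller
  #   cdw10:00000002  CNS 02h  active namespace ID list
  #   cdw10:00000000  CNS 00h  identify namespace (nsid:1)
  #   cdw10:00000003  CNS 03h  namespace identification descriptor list
  # SET FEATURES NUMBER OF QUEUES completion cdw0:7e007e = 126/126 zero-based,
  # i.e. the 127 I/O submission and completion queues reported in the identify data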
00:17:37.372 [2024-11-26 07:27:05.271174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:37.372 [2024-11-26 07:27:05.279164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:37.372 [2024-11-26 07:27:05.279176] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:37.372 [2024-11-26 07:27:05.279180] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:37.372 [2024-11-26 07:27:05.279182] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:37.372 [2024-11-26 07:27:05.279185] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:37.372 [2024-11-26 07:27:05.279187] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:37.372 [2024-11-26 07:27:05.279192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:37.372 [2024-11-26 07:27:05.279198] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:37.372 [2024-11-26 07:27:05.279201] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:37.372 [2024-11-26 07:27:05.279203] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:37.372 [2024-11-26 07:27:05.279207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:37.372 [2024-11-26 07:27:05.279213] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:37.372 [2024-11-26 07:27:05.279216] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:37.373 [2024-11-26 07:27:05.279218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:37.373 [2024-11-26 07:27:05.279224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:37.373 [2024-11-26 07:27:05.279229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:37.373 [2024-11-26 07:27:05.279232] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:37.373 [2024-11-26 07:27:05.279235] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:37.373 [2024-11-26 07:27:05.279239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:37.373 [2024-11-26 07:27:05.287163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:37.373 [2024-11-26 07:27:05.287174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:37.373 [2024-11-26 07:27:05.287182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:37.373 
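The four GET LOG PAGE commands above fetch the mandatory pages in one pass. CDW10 packs the zero-based dword count (NUMDL, bits 27:16) over the log identifier (bits 7:0); the 8 KiB error-log read spans two pages, which is why it is the only one carrying a PRP2 entry:

  # GET LOG PAGE (02h), CDW10 = NUMDL << 16 | LID:
  #   cdw10:07ff0001  LID 01h error log              0x7ff+1 = 2048 dwords = 8192 B (PRP1+PRP2)
  #   cdw10:007f0002  LID 02h SMART / health         0x7f+1  = 128 dwords  = 512 B
  #   cdw10:007f0003  LID 03h firmware slot info     512 B
  #   cdw10:03ff0005  LID 05h commands supported and effects  0x3ff+1 = 1024 dwords = 4096 B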
[2024-11-26 07:27:05.287187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:37.373 ===================================================== 00:17:37.373 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:37.373 ===================================================== 00:17:37.373 Controller Capabilities/Features 00:17:37.373 ================================ 00:17:37.373 Vendor ID: 4e58 00:17:37.373 Subsystem Vendor ID: 4e58 00:17:37.373 Serial Number: SPDK2 00:17:37.373 Model Number: SPDK bdev Controller 00:17:37.373 Firmware Version: 25.01 00:17:37.373 Recommended Arb Burst: 6 00:17:37.373 IEEE OUI Identifier: 8d 6b 50 00:17:37.373 Multi-path I/O 00:17:37.373 May have multiple subsystem ports: Yes 00:17:37.373 May have multiple controllers: Yes 00:17:37.373 Associated with SR-IOV VF: No 00:17:37.373 Max Data Transfer Size: 131072 00:17:37.373 Max Number of Namespaces: 32 00:17:37.373 Max Number of I/O Queues: 127 00:17:37.373 NVMe Specification Version (VS): 1.3 00:17:37.373 NVMe Specification Version (Identify): 1.3 00:17:37.373 Maximum Queue Entries: 256 00:17:37.373 Contiguous Queues Required: Yes 00:17:37.373 Arbitration Mechanisms Supported 00:17:37.373 Weighted Round Robin: Not Supported 00:17:37.373 Vendor Specific: Not Supported 00:17:37.373 Reset Timeout: 15000 ms 00:17:37.373 Doorbell Stride: 4 bytes 00:17:37.373 NVM Subsystem Reset: Not Supported 00:17:37.373 Command Sets Supported 00:17:37.373 NVM Command Set: Supported 00:17:37.373 Boot Partition: Not Supported 00:17:37.373 Memory Page Size Minimum: 4096 bytes 00:17:37.373 Memory Page Size Maximum: 4096 bytes 00:17:37.373 Persistent Memory Region: Not Supported 00:17:37.373 Optional Asynchronous Events Supported 00:17:37.373 Namespace Attribute Notices: Supported 00:17:37.373 Firmware Activation Notices: Not Supported 00:17:37.373 ANA Change Notices: Not Supported 00:17:37.373 PLE Aggregate Log Change Notices: Not Supported 00:17:37.373 LBA Status Info Alert Notices: Not Supported 00:17:37.373 EGE Aggregate Log Change Notices: Not Supported 00:17:37.373 Normal NVM Subsystem Shutdown event: Not Supported 00:17:37.373 Zone Descriptor Change Notices: Not Supported 00:17:37.373 Discovery Log Change Notices: Not Supported 00:17:37.373 Controller Attributes 00:17:37.373 128-bit Host Identifier: Supported 00:17:37.373 Non-Operational Permissive Mode: Not Supported 00:17:37.373 NVM Sets: Not Supported 00:17:37.373 Read Recovery Levels: Not Supported 00:17:37.373 Endurance Groups: Not Supported 00:17:37.373 Predictable Latency Mode: Not Supported 00:17:37.373 Traffic Based Keep ALive: Not Supported 00:17:37.373 Namespace Granularity: Not Supported 00:17:37.373 SQ Associations: Not Supported 00:17:37.373 UUID List: Not Supported 00:17:37.373 Multi-Domain Subsystem: Not Supported 00:17:37.373 Fixed Capacity Management: Not Supported 00:17:37.373 Variable Capacity Management: Not Supported 00:17:37.373 Delete Endurance Group: Not Supported 00:17:37.373 Delete NVM Set: Not Supported 00:17:37.373 Extended LBA Formats Supported: Not Supported 00:17:37.373 Flexible Data Placement Supported: Not Supported 00:17:37.373 00:17:37.373 Controller Memory Buffer Support 00:17:37.373 ================================ 00:17:37.373 Supported: No 00:17:37.373 00:17:37.373 Persistent Memory Region Support 00:17:37.373 ================================ 00:17:37.373 Supported: No 00:17:37.373 00:17:37.373 Admin Command Set Attributes 
00:17:37.373 ============================ 00:17:37.373 Security Send/Receive: Not Supported 00:17:37.373 Format NVM: Not Supported 00:17:37.373 Firmware Activate/Download: Not Supported 00:17:37.373 Namespace Management: Not Supported 00:17:37.373 Device Self-Test: Not Supported 00:17:37.373 Directives: Not Supported 00:17:37.373 NVMe-MI: Not Supported 00:17:37.373 Virtualization Management: Not Supported 00:17:37.373 Doorbell Buffer Config: Not Supported 00:17:37.373 Get LBA Status Capability: Not Supported 00:17:37.373 Command & Feature Lockdown Capability: Not Supported 00:17:37.373 Abort Command Limit: 4 00:17:37.373 Async Event Request Limit: 4 00:17:37.373 Number of Firmware Slots: N/A 00:17:37.373 Firmware Slot 1 Read-Only: N/A 00:17:37.373 Firmware Activation Without Reset: N/A 00:17:37.373 Multiple Update Detection Support: N/A 00:17:37.373 Firmware Update Granularity: No Information Provided 00:17:37.373 Per-Namespace SMART Log: No 00:17:37.373 Asymmetric Namespace Access Log Page: Not Supported 00:17:37.373 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:37.373 Command Effects Log Page: Supported 00:17:37.373 Get Log Page Extended Data: Supported 00:17:37.373 Telemetry Log Pages: Not Supported 00:17:37.373 Persistent Event Log Pages: Not Supported 00:17:37.373 Supported Log Pages Log Page: May Support 00:17:37.373 Commands Supported & Effects Log Page: Not Supported 00:17:37.373 Feature Identifiers & Effects Log Page:May Support 00:17:37.373 NVMe-MI Commands & Effects Log Page: May Support 00:17:37.373 Data Area 4 for Telemetry Log: Not Supported 00:17:37.373 Error Log Page Entries Supported: 128 00:17:37.373 Keep Alive: Supported 00:17:37.373 Keep Alive Granularity: 10000 ms 00:17:37.373 00:17:37.373 NVM Command Set Attributes 00:17:37.373 ========================== 00:17:37.373 Submission Queue Entry Size 00:17:37.373 Max: 64 00:17:37.373 Min: 64 00:17:37.373 Completion Queue Entry Size 00:17:37.374 Max: 16 00:17:37.374 Min: 16 00:17:37.374 Number of Namespaces: 32 00:17:37.374 Compare Command: Supported 00:17:37.374 Write Uncorrectable Command: Not Supported 00:17:37.374 Dataset Management Command: Supported 00:17:37.374 Write Zeroes Command: Supported 00:17:37.374 Set Features Save Field: Not Supported 00:17:37.374 Reservations: Not Supported 00:17:37.374 Timestamp: Not Supported 00:17:37.374 Copy: Supported 00:17:37.374 Volatile Write Cache: Present 00:17:37.374 Atomic Write Unit (Normal): 1 00:17:37.374 Atomic Write Unit (PFail): 1 00:17:37.374 Atomic Compare & Write Unit: 1 00:17:37.374 Fused Compare & Write: Supported 00:17:37.374 Scatter-Gather List 00:17:37.374 SGL Command Set: Supported (Dword aligned) 00:17:37.374 SGL Keyed: Not Supported 00:17:37.374 SGL Bit Bucket Descriptor: Not Supported 00:17:37.374 SGL Metadata Pointer: Not Supported 00:17:37.374 Oversized SGL: Not Supported 00:17:37.374 SGL Metadata Address: Not Supported 00:17:37.374 SGL Offset: Not Supported 00:17:37.374 Transport SGL Data Block: Not Supported 00:17:37.374 Replay Protected Memory Block: Not Supported 00:17:37.374 00:17:37.374 Firmware Slot Information 00:17:37.374 ========================= 00:17:37.374 Active slot: 1 00:17:37.374 Slot 1 Firmware Revision: 25.01 00:17:37.374 00:17:37.374 00:17:37.374 Commands Supported and Effects 00:17:37.374 ============================== 00:17:37.374 Admin Commands 00:17:37.374 -------------- 00:17:37.374 Get Log Page (02h): Supported 00:17:37.374 Identify (06h): Supported 00:17:37.374 Abort (08h): Supported 00:17:37.374 Set Features (09h): Supported 
00:17:37.374 Get Features (0Ah): Supported 00:17:37.374 Asynchronous Event Request (0Ch): Supported 00:17:37.374 Keep Alive (18h): Supported 00:17:37.374 I/O Commands 00:17:37.374 ------------ 00:17:37.374 Flush (00h): Supported LBA-Change 00:17:37.374 Write (01h): Supported LBA-Change 00:17:37.374 Read (02h): Supported 00:17:37.374 Compare (05h): Supported 00:17:37.374 Write Zeroes (08h): Supported LBA-Change 00:17:37.374 Dataset Management (09h): Supported LBA-Change 00:17:37.374 Copy (19h): Supported LBA-Change 00:17:37.374
00:17:37.374 Error Log 00:17:37.374 ========= 00:17:37.374 00:17:37.374 Arbitration 00:17:37.374 =========== 00:17:37.374 Arbitration Burst: 1 00:17:37.374
00:17:37.374 Power Management 00:17:37.374 ================ 00:17:37.374 Number of Power States: 1 00:17:37.374 Current Power State: Power State #0 00:17:37.374 Power State #0: 00:17:37.374 Max Power: 0.00 W 00:17:37.374 Non-Operational State: Operational 00:17:37.374 Entry Latency: Not Reported 00:17:37.374 Exit Latency: Not Reported 00:17:37.374 Relative Read Throughput: 0 00:17:37.374 Relative Read Latency: 0 00:17:37.374 Relative Write Throughput: 0 00:17:37.374 Relative Write Latency: 0 00:17:37.374 Idle Power: Not Reported 00:17:37.374 Active Power: Not Reported 00:17:37.374 Non-Operational Permissive Mode: Not Supported 00:17:37.374
00:17:37.374 Health Information 00:17:37.374 ================== 00:17:37.374 Critical Warnings: 00:17:37.374 Available Spare Space: OK 00:17:37.374 Temperature: OK 00:17:37.374 Device Reliability: OK 00:17:37.374 Read Only: No 00:17:37.374 Volatile Memory Backup: OK 00:17:37.374 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:37.374 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:37.374 Available Spare: 0% 00:17:37.374 Available Spare Threshold: 0% 00:17:37.374 Life Percentage Used: 0% 00:17:37.374 Data Units Read: 0 00:17:37.374 Data Units Written: 0 00:17:37.374 Host Read Commands: 0 00:17:37.374 Host Write Commands: 0 00:17:37.374 Controller Busy Time: 0 minutes 00:17:37.374 Power Cycles: 0 00:17:37.374 Power On Hours: 0 hours 00:17:37.374 Unsafe Shutdowns: 0 00:17:37.374 Unrecoverable Media Errors: 0 00:17:37.374 Lifetime Error Log Entries: 0 00:17:37.374 Warning Temperature Time: 0 minutes 00:17:37.374 Critical Temperature Time: 0 minutes 00:17:37.374
00:17:37.374 Number of Queues 00:17:37.374 ================ 00:17:37.374 Number of I/O Submission Queues: 127 00:17:37.374 Number of I/O Completion Queues: 127 00:17:37.374
00:17:37.374 Active Namespaces 00:17:37.374 ================= 00:17:37.374 Namespace ID:1 00:17:37.374 Error Recovery Timeout: Unlimited 00:17:37.374 Command Set Identifier: NVM (00h) 00:17:37.374 Deallocate: Supported 00:17:37.374 Deallocated/Unwritten Error: Not Supported 00:17:37.374 Deallocated Read Value: Unknown 00:17:37.374 Deallocate in Write Zeroes: Not Supported 00:17:37.374 Deallocated Guard Field: 0xFFFF 00:17:37.374 Flush: Supported 00:17:37.375 Reservation: Supported 00:17:37.375 Namespace Sharing Capabilities: Multiple Controllers 00:17:37.375 Size (in LBAs): 131072 (0GiB) 00:17:37.375 Capacity (in LBAs): 131072 (0GiB) 00:17:37.375 Utilization (in LBAs): 131072 (0GiB) 00:17:37.375 NGUID: 73DF0DAEF0304672AB19C3625F996E7B 00:17:37.375 UUID: 73df0dae-f030-4672-ab19-c3625f996e7b 00:17:37.375 Thin Provisioning: Not Supported 00:17:37.375 Per-NS Atomic Units: Yes 00:17:37.375 Atomic Boundary Size (Normal): 0 00:17:37.375 Atomic Boundary Size (PFail): 0 00:17:37.375 Atomic Boundary Offset: 0 00:17:37.375 Maximum Single Source Range Length: 65535 00:17:37.375 Maximum Copy Length: 65535 00:17:37.375 Maximum Source Range Count: 1 00:17:37.375 NGUID/EUI64 Never Reused: No 00:17:37.375 Namespace Write Protected: No 00:17:37.375 Number of LBA Formats: 1 00:17:37.375 Current LBA Format: LBA Format #00 00:17:37.375 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:37.375 00:17:37.375
[2024-11-26 07:27:05.287262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:37.374 [2024-11-26 07:27:05.295163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:37.374 [2024-11-26 07:27:05.295187] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD [2024-11-26 07:27:05.295195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-26 07:27:05.295199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-26 07:27:05.295204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-26 07:27:05.295208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-26 07:27:05.295239] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 [2024-11-26 07:27:05.295247] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 [2024-11-26 07:27:05.296240] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller [2024-11-26 07:27:05.296280] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:17:37.374 [2024-11-26 07:27:05.296285] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:17:37.374 [2024-11-26 07:27:05.297243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:37.374 [2024-11-26 07:27:05.297252] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:17:37.374 [2024-11-26 07:27:05.297294] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:37.374 [2024-11-26 07:27:05.298259] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:37.374
07:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:37.636 [2024-11-26 07:27:05.488522] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:42.923 Initializing NVMe Controllers 00:17:42.923
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:42.923 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:42.923 Initialization complete. Launching workers. 00:17:42.924 ======================================================== 00:17:42.924 Latency(us) 00:17:42.924 Device Information : IOPS MiB/s Average min max 00:17:42.924 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39982.70 156.18 3201.05 845.08 8752.97 00:17:42.924 ======================================================== 00:17:42.924 Total : 39982.70 156.18 3201.05 845.08 8752.97 00:17:42.924 00:17:42.924 [2024-11-26 07:27:10.594354] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:42.924 07:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:42.924 [2024-11-26 07:27:10.785954] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:48.206 Initializing NVMe Controllers 00:17:48.206 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:48.206 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:48.206 Initialization complete. Launching workers. 00:17:48.206 ======================================================== 00:17:48.206 Latency(us) 00:17:48.206 Device Information : IOPS MiB/s Average min max 00:17:48.206 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40052.80 156.46 3195.74 847.57 7703.35 00:17:48.206 ======================================================== 00:17:48.206 Total : 40052.80 156.46 3195.74 847.57 7703.35 00:17:48.206 00:17:48.206 [2024-11-26 07:27:15.805858] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:48.206 07:27:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:48.206 [2024-11-26 07:27:16.020078] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:53.489 [2024-11-26 07:27:21.162255] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:53.489 Initializing NVMe Controllers 00:17:53.489 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:53.489 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:53.489 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:53.489 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:53.489 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:53.489 Initialization complete. Launching workers. 
00:17:53.489 Starting thread on core 2 00:17:53.489 Starting thread on core 3 00:17:53.489 Starting thread on core 1 00:17:53.489 07:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:53.489 [2024-11-26 07:27:21.407296] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:56.904 [2024-11-26 07:27:24.484156] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:56.904 Initializing NVMe Controllers 00:17:56.904 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:56.904 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:56.904 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:56.904 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:56.904 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:56.904 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:56.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:56.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:56.904 Initialization complete. Launching workers. 00:17:56.904 Starting thread on core 1 with urgent priority queue 00:17:56.904 Starting thread on core 2 with urgent priority queue 00:17:56.904 Starting thread on core 3 with urgent priority queue 00:17:56.904 Starting thread on core 0 with urgent priority queue 00:17:56.904 SPDK bdev Controller (SPDK2 ) core 0: 11018.67 IO/s 9.08 secs/100000 ios 00:17:56.904 SPDK bdev Controller (SPDK2 ) core 1: 6445.33 IO/s 15.52 secs/100000 ios 00:17:56.904 SPDK bdev Controller (SPDK2 ) core 2: 6329.67 IO/s 15.80 secs/100000 ios 00:17:56.904 SPDK bdev Controller (SPDK2 ) core 3: 10819.00 IO/s 9.24 secs/100000 ios 00:17:56.904 ======================================================== 00:17:56.904 00:17:56.904 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:56.904 [2024-11-26 07:27:24.723200] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:56.904 Initializing NVMe Controllers 00:17:56.904 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:56.904 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:56.904 Namespace ID: 1 size: 0GB 00:17:56.904 Initialization complete. 00:17:56.904 INFO: using host memory buffer for IO 00:17:56.904 Hello world! 
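Each example tool in this stretch of the run (spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the emulated controller through the same SPDK transport ID string rather than a PCI address: trtype selects the vfio-user transport, traddr points at the socket directory the target listens on, and subnqn names the subsystem. A minimal sketch of that pattern, using the paths and flags from this run (the SPDK and TRID shell variables are shorthand introduced here, not part of the test scripts):

# vfio-user controllers are addressed by socket directory plus subsystem NQN
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# 5-second 4 KiB read pass at queue depth 128 on core 1 (mask 0x2), as in the perf run above
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

# single-I/O smoke test against the same controller, as in the hello_world run above
$SPDK/build/examples/hello_world -d 256 -g -r "$TRID"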
00:17:56.904 [2024-11-26 07:27:24.734275] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:56.904 07:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:56.904 [2024-11-26 07:27:24.970593] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:58.288 Initializing NVMe Controllers 00:17:58.288 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:58.288 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:58.288 Initialization complete. Launching workers. 00:17:58.288 submit (in ns) avg, min, max = 5892.1, 2819.2, 4022827.5 00:17:58.288 complete (in ns) avg, min, max = 16673.7, 1638.3, 6989463.3 00:17:58.288 00:17:58.288 Submit histogram 00:17:58.288 ================ 00:17:58.288 Range in us Cumulative Count 00:17:58.288 2.813 - 2.827: 0.3316% ( 68) 00:17:58.288 2.827 - 2.840: 1.3703% ( 213) 00:17:58.288 2.840 - 2.853: 3.9452% ( 528) 00:17:58.288 2.853 - 2.867: 8.6804% ( 971) 00:17:58.288 2.867 - 2.880: 13.2644% ( 940) 00:17:58.288 2.880 - 2.893: 18.8189% ( 1139) 00:17:58.288 2.893 - 2.907: 24.6757% ( 1201) 00:17:58.288 2.907 - 2.920: 30.3131% ( 1156) 00:17:58.288 2.920 - 2.933: 36.0382% ( 1174) 00:17:58.288 2.933 - 2.947: 40.7100% ( 958) 00:17:58.288 2.947 - 2.960: 45.4647% ( 975) 00:17:58.288 2.960 - 2.973: 50.9851% ( 1132) 00:17:58.288 2.973 - 2.987: 58.2756% ( 1495) 00:17:58.288 2.987 - 3.000: 67.3949% ( 1870) 00:17:58.288 3.000 - 3.013: 75.7973% ( 1723) 00:17:58.288 3.013 - 3.027: 83.4683% ( 1573) 00:17:58.288 3.027 - 3.040: 89.8859% ( 1316) 00:17:58.288 3.040 - 3.053: 94.0359% ( 851) 00:17:58.288 3.053 - 3.067: 96.8448% ( 576) 00:17:58.288 3.067 - 3.080: 98.3371% ( 306) 00:17:58.288 3.080 - 3.093: 99.0637% ( 149) 00:17:58.289 3.093 - 3.107: 99.4246% ( 74) 00:17:58.289 3.107 - 3.120: 99.5416% ( 24) 00:17:58.289 3.120 - 3.133: 99.6001% ( 12) 00:17:58.289 3.133 - 3.147: 99.6050% ( 1) 00:17:58.289 3.240 - 3.253: 99.6099% ( 1) 00:17:58.289 3.253 - 3.267: 99.6147% ( 1) 00:17:58.289 3.493 - 3.520: 99.6196% ( 1) 00:17:58.289 3.520 - 3.547: 99.6245% ( 1) 00:17:58.289 3.680 - 3.707: 99.6294% ( 1) 00:17:58.289 4.240 - 4.267: 99.6343% ( 1) 00:17:58.289 4.267 - 4.293: 99.6391% ( 1) 00:17:58.289 4.347 - 4.373: 99.6440% ( 1) 00:17:58.289 4.480 - 4.507: 99.6489% ( 1) 00:17:58.289 4.533 - 4.560: 99.6586% ( 2) 00:17:58.289 4.560 - 4.587: 99.6635% ( 1) 00:17:58.289 4.613 - 4.640: 99.6733% ( 2) 00:17:58.289 4.693 - 4.720: 99.6781% ( 1) 00:17:58.289 4.747 - 4.773: 99.6879% ( 2) 00:17:58.289 4.800 - 4.827: 99.6928% ( 1) 00:17:58.289 4.827 - 4.853: 99.6976% ( 1) 00:17:58.289 4.880 - 4.907: 99.7123% ( 3) 00:17:58.289 4.907 - 4.933: 99.7172% ( 1) 00:17:58.289 4.933 - 4.960: 99.7220% ( 1) 00:17:58.289 4.987 - 5.013: 99.7269% ( 1) 00:17:58.289 5.040 - 5.067: 99.7367% ( 2) 00:17:58.289 5.120 - 5.147: 99.7464% ( 2) 00:17:58.289 5.147 - 5.173: 99.7562% ( 2) 00:17:58.289 5.280 - 5.307: 99.7610% ( 1) 00:17:58.289 5.333 - 5.360: 99.7708% ( 2) 00:17:58.289 5.547 - 5.573: 99.7806% ( 2) 00:17:58.289 5.600 - 5.627: 99.7854% ( 1) 00:17:58.289 5.653 - 5.680: 99.7903% ( 1) 00:17:58.289 5.680 - 5.707: 99.8049% ( 3) 00:17:58.289 5.707 - 5.733: 99.8098% ( 1) 00:17:58.289 5.733 - 5.760: 99.8147% ( 1) 00:17:58.289 5.787 - 5.813: 99.8196% ( 1) 00:17:58.289 5.813 - 5.840: 
99.8244% ( 1) 00:17:58.289 5.893 - 5.920: 99.8293% ( 1) 00:17:58.289 6.027 - 6.053: 99.8342% ( 1) 00:17:58.289 6.107 - 6.133: 99.8439% ( 2) 00:17:58.289 6.133 - 6.160: 99.8488% ( 1) 00:17:58.289 6.187 - 6.213: 99.8537% ( 1) 00:17:58.289 6.213 - 6.240: 99.8635% ( 2) 00:17:58.289 6.267 - 6.293: 99.8683% ( 1) 00:17:58.289 6.293 - 6.320: 99.8781% ( 2) 00:17:58.289 6.373 - 6.400: 99.8830% ( 1) 00:17:58.289 6.533 - 6.560: 99.8927% ( 2) 00:17:58.289 6.587 - 6.613: 99.8976% ( 1) 00:17:58.289 6.747 - 6.773: 99.9025% ( 1) 00:17:58.289 6.880 - 6.933: 99.9073% ( 1) 00:17:58.289 7.040 - 7.093: 99.9122% ( 1) 00:17:58.289 7.093 - 7.147: 99.9220% ( 2) 00:17:58.289 12.160 - 12.213: 99.9269% ( 1) 00:17:58.289 3986.773 - 4014.080: 99.9951% ( 14) 00:17:58.289 4014.080 - 4041.387: 100.0000% ( 1) 00:17:58.289 00:17:58.289 Complete histogram 00:17:58.289 ================== 00:17:58.289 Range in us Cumulative Count 00:17:58.289 1.633 - 1.640: 0.0049% ( 1) 00:17:58.289 1.640 - 1.647: 0.0146% ( 2) 00:17:58.289 [2024-11-26 07:27:26.064666] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:58.289 1.647 - 1.653: 0.0585% ( 9) 00:17:58.289 1.653 - 1.660: 0.6486% ( 121) 00:17:58.289 1.660 - 1.667: 0.7071% ( 12) 00:17:58.289 1.667 - 1.673: 0.7461% ( 8) 00:17:58.289 1.673 - 1.680: 0.8144% ( 14) 00:17:58.289 1.680 - 1.687: 0.8680% ( 11) 00:17:58.289 1.687 - 1.693: 0.9022% ( 7) 00:17:58.289 1.693 - 1.700: 18.5409% ( 3617) 00:17:58.289 1.700 - 1.707: 55.1351% ( 7504) 00:17:58.289 1.707 - 1.720: 68.3263% ( 2705) 00:17:58.289 1.720 - 1.733: 80.1473% ( 2424) 00:17:58.289 1.733 - 1.747: 83.5121% ( 690) 00:17:58.289 1.747 - 1.760: 85.2238% ( 351) 00:17:58.289 1.760 - 1.773: 90.1053% ( 1001) 00:17:58.289 1.773 - 1.787: 95.5476% ( 1116) 00:17:58.289 1.787 - 1.800: 98.1420% ( 532) 00:17:58.289 1.800 - 1.813: 99.2002% ( 217) 00:17:58.289 1.813 - 1.827: 99.4831% ( 58) 00:17:58.289 1.827 - 1.840: 99.5026% ( 4) 00:17:58.289 3.493 - 3.520: 99.5075% ( 1) 00:17:58.289 3.760 - 3.787: 99.5123% ( 1) 00:17:58.289 3.947 - 3.973: 99.5172% ( 1) 00:17:58.289 4.053 - 4.080: 99.5221% ( 1) 00:17:58.289 4.080 - 4.107: 99.5270% ( 1) 00:17:58.289 4.240 - 4.267: 99.5318% ( 1) 00:17:58.289 4.373 - 4.400: 99.5367% ( 1) 00:17:58.289 4.427 - 4.453: 99.5416% ( 1) 00:17:58.289 4.480 - 4.507: 99.5465% ( 1) 00:17:58.289 4.560 - 4.587: 99.5514% ( 1) 00:17:58.289 4.587 - 4.613: 99.5611% ( 2) 00:17:58.289 4.667 - 4.693: 99.5660% ( 1) 00:17:58.289 4.693 - 4.720: 99.5709% ( 1) 00:17:58.289 4.720 - 4.747: 99.5757% ( 1) 00:17:58.289 4.827 - 4.853: 99.5806% ( 1) 00:17:58.289 4.933 - 4.960: 99.5855% ( 1) 00:17:58.289 4.960 - 4.987: 99.5952% ( 2) 00:17:58.289 5.147 - 5.173: 99.6001% ( 1) 00:17:58.289 5.173 - 5.200: 99.6050% ( 1) 00:17:58.289 5.253 - 5.280: 99.6099% ( 1) 00:17:58.289 5.547 - 5.573: 99.6147% ( 1) 00:17:58.289 5.573 - 5.600: 99.6196% ( 1) 00:17:58.289 8.320 - 8.373: 99.6245% ( 1) 00:17:58.289 11.093 - 11.147: 99.6294% ( 1) 00:17:58.289 3986.773 - 4014.080: 99.9902% ( 74) 00:17:58.289 4014.080 - 4041.387: 99.9951% ( 1) 00:17:58.289 6963.200 - 6990.507: 100.0000% ( 1) 00:17:58.289 00:17:58.289 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:58.289 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:58.289 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23
-- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:58.289 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:58.289 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:58.289 [ 00:17:58.289 { 00:17:58.289 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:58.289 "subtype": "Discovery", 00:17:58.289 "listen_addresses": [], 00:17:58.289 "allow_any_host": true, 00:17:58.289 "hosts": [] 00:17:58.289 }, 00:17:58.289 { 00:17:58.289 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:58.289 "subtype": "NVMe", 00:17:58.289 "listen_addresses": [ 00:17:58.289 { 00:17:58.289 "trtype": "VFIOUSER", 00:17:58.289 "adrfam": "IPv4", 00:17:58.289 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:58.289 "trsvcid": "0" 00:17:58.289 } 00:17:58.289 ], 00:17:58.289 "allow_any_host": true, 00:17:58.289 "hosts": [], 00:17:58.289 "serial_number": "SPDK1", 00:17:58.289 "model_number": "SPDK bdev Controller", 00:17:58.289 "max_namespaces": 32, 00:17:58.289 "min_cntlid": 1, 00:17:58.289 "max_cntlid": 65519, 00:17:58.289 "namespaces": [ 00:17:58.289 { 00:17:58.289 "nsid": 1, 00:17:58.289 "bdev_name": "Malloc1", 00:17:58.289 "name": "Malloc1", 00:17:58.289 "nguid": "7BB9C8AB213F4EA195253B1A739BFDBF", 00:17:58.289 "uuid": "7bb9c8ab-213f-4ea1-9525-3b1a739bfdbf" 00:17:58.289 }, 00:17:58.289 { 00:17:58.289 "nsid": 2, 00:17:58.289 "bdev_name": "Malloc3", 00:17:58.289 "name": "Malloc3", 00:17:58.289 "nguid": "8B98160DC5684B77AD682C4E454A5045", 00:17:58.289 "uuid": "8b98160d-c568-4b77-ad68-2c4e454a5045" 00:17:58.289 } 00:17:58.289 ] 00:17:58.289 }, 00:17:58.289 { 00:17:58.289 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:58.289 "subtype": "NVMe", 00:17:58.289 "listen_addresses": [ 00:17:58.289 { 00:17:58.289 "trtype": "VFIOUSER", 00:17:58.289 "adrfam": "IPv4", 00:17:58.289 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:58.289 "trsvcid": "0" 00:17:58.289 } 00:17:58.289 ], 00:17:58.289 "allow_any_host": true, 00:17:58.289 "hosts": [], 00:17:58.289 "serial_number": "SPDK2", 00:17:58.289 "model_number": "SPDK bdev Controller", 00:17:58.289 "max_namespaces": 32, 00:17:58.289 "min_cntlid": 1, 00:17:58.289 "max_cntlid": 65519, 00:17:58.289 "namespaces": [ 00:17:58.289 { 00:17:58.289 "nsid": 1, 00:17:58.289 "bdev_name": "Malloc2", 00:17:58.289 "name": "Malloc2", 00:17:58.289 "nguid": "73DF0DAEF0304672AB19C3625F996E7B", 00:17:58.289 "uuid": "73df0dae-f030-4672-ab19-c3625f996e7b" 00:17:58.289 } 00:17:58.289 ] 00:17:58.289 } 00:17:58.289 ] 00:17:58.289 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:58.289 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1418517 00:17:58.289 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:58.290 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:58.290 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:58.290 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:58.290 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:58.290 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:58.290 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:58.290 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:58.550 [2024-11-26 07:27:26.453560] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:58.550 Malloc4 00:17:58.550 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:58.550 [2024-11-26 07:27:26.639834] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:58.811 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:58.811 Asynchronous Event Request test 00:17:58.811 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:58.811 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:58.811 Registering asynchronous event callbacks... 00:17:58.811 Starting namespace attribute notice tests for all controllers... 00:17:58.811 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:58.811 aer_cb - Changed Namespace 00:17:58.811 Cleaning up... 
00:17:58.811 [ 00:17:58.811 { 00:17:58.811 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:58.811 "subtype": "Discovery", 00:17:58.811 "listen_addresses": [], 00:17:58.811 "allow_any_host": true, 00:17:58.811 "hosts": [] 00:17:58.811 }, 00:17:58.811 { 00:17:58.811 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:58.811 "subtype": "NVMe", 00:17:58.811 "listen_addresses": [ 00:17:58.811 { 00:17:58.811 "trtype": "VFIOUSER", 00:17:58.811 "adrfam": "IPv4", 00:17:58.811 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:58.811 "trsvcid": "0" 00:17:58.811 } 00:17:58.811 ], 00:17:58.811 "allow_any_host": true, 00:17:58.811 "hosts": [], 00:17:58.811 "serial_number": "SPDK1", 00:17:58.811 "model_number": "SPDK bdev Controller", 00:17:58.811 "max_namespaces": 32, 00:17:58.811 "min_cntlid": 1, 00:17:58.811 "max_cntlid": 65519, 00:17:58.811 "namespaces": [ 00:17:58.811 { 00:17:58.811 "nsid": 1, 00:17:58.811 "bdev_name": "Malloc1", 00:17:58.811 "name": "Malloc1", 00:17:58.811 "nguid": "7BB9C8AB213F4EA195253B1A739BFDBF", 00:17:58.811 "uuid": "7bb9c8ab-213f-4ea1-9525-3b1a739bfdbf" 00:17:58.811 }, 00:17:58.811 { 00:17:58.811 "nsid": 2, 00:17:58.811 "bdev_name": "Malloc3", 00:17:58.811 "name": "Malloc3", 00:17:58.811 "nguid": "8B98160DC5684B77AD682C4E454A5045", 00:17:58.811 "uuid": "8b98160d-c568-4b77-ad68-2c4e454a5045" 00:17:58.811 } 00:17:58.811 ] 00:17:58.811 }, 00:17:58.811 { 00:17:58.811 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:58.811 "subtype": "NVMe", 00:17:58.811 "listen_addresses": [ 00:17:58.811 { 00:17:58.811 "trtype": "VFIOUSER", 00:17:58.812 "adrfam": "IPv4", 00:17:58.812 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:58.812 "trsvcid": "0" 00:17:58.812 } 00:17:58.812 ], 00:17:58.812 "allow_any_host": true, 00:17:58.812 "hosts": [], 00:17:58.812 "serial_number": "SPDK2", 00:17:58.812 "model_number": "SPDK bdev Controller", 00:17:58.812 "max_namespaces": 32, 00:17:58.812 "min_cntlid": 1, 00:17:58.812 "max_cntlid": 65519, 00:17:58.812 "namespaces": [ 00:17:58.812 { 00:17:58.812 "nsid": 1, 00:17:58.812 "bdev_name": "Malloc2", 00:17:58.812 "name": "Malloc2", 00:17:58.812 "nguid": "73DF0DAEF0304672AB19C3625F996E7B", 00:17:58.812 "uuid": "73df0dae-f030-4672-ab19-c3625f996e7b" 00:17:58.812 }, 00:17:58.812 { 00:17:58.812 "nsid": 2, 00:17:58.812 "bdev_name": "Malloc4", 00:17:58.812 "name": "Malloc4", 00:17:58.812 "nguid": "6AC7ADDE5FDC4AC8AB58F5BCD19F0DBC", 00:17:58.812 "uuid": "6ac7adde-5fdc-4ac8-ab58-f5bcd19f0dbc" 00:17:58.812 } 00:17:58.812 ] 00:17:58.812 } 00:17:58.812 ] 00:17:58.812 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1418517 00:17:58.812 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:58.812 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1408864 00:17:58.812 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1408864 ']' 00:17:58.812 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1408864 00:17:58.812 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:58.812 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.812 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1408864 00:17:59.072 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.072 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.072 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1408864' 00:17:59.072 killing process with pid 1408864 00:17:59.072 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1408864 00:17:59.072 07:27:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1408864 00:17:59.072 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1418549 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1418549' 00:17:59.073 Process pid: 1418549 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1418549 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1418549 ']' 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.073 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:59.073 [2024-11-26 07:27:27.112020] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:59.073 [2024-11-26 07:27:27.112957] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:17:59.073 [2024-11-26 07:27:27.113000] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.333 [2024-11-26 07:27:27.199015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.334 [2024-11-26 07:27:27.233361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.334 [2024-11-26 07:27:27.233393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.334 [2024-11-26 07:27:27.233399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.334 [2024-11-26 07:27:27.233403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.334 [2024-11-26 07:27:27.233407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.334 [2024-11-26 07:27:27.234712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.334 [2024-11-26 07:27:27.234862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.334 [2024-11-26 07:27:27.235012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.334 [2024-11-26 07:27:27.235014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.334 [2024-11-26 07:27:27.287286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:59.334 [2024-11-26 07:27:27.288240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:59.334 [2024-11-26 07:27:27.288526] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:59.334 [2024-11-26 07:27:27.289747] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:59.334 [2024-11-26 07:27:27.289784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
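With the target relaunched under --interrupt-mode and each poll-group thread switched to interrupt-driven operation, the script re-provisions the vfio-user devices over JSON-RPC. The sequence below condenses the setup trace that follows into one device's worth of commands; it is a sketch only (the SPDK and RPC variables are shorthand introduced here, and the -M -I transport arguments are simply the script's pass-through transport_args):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &   # target on cores 0-3, interrupt mode
$RPC nvmf_create_transport -t VFIOUSER -M -I                                # vfio-user transport
mkdir -p /var/run/vfio-user/domain/vfio-user1/1                             # socket directory doubles as the traddr
$RPC bdev_malloc_create 64 512 -b Malloc1                                   # 64 MB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1           # allow any host, serial number SPDK1
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1               # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0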
00:17:59.906 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.906 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:59.906 07:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:00.847 07:27:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:01.108 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:01.108 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:01.108 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:01.108 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:01.108 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:01.368 Malloc1 00:18:01.368 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:01.629 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:01.890 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:01.890 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:01.890 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:01.890 07:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:02.151 Malloc2 00:18:02.151 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:02.411 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:02.411 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1418549 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1418549 ']' 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1418549 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1418549 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1418549' 00:18:02.673 killing process with pid 1418549 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1418549 00:18:02.673 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1418549 00:18:02.934 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:02.934 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:02.934 00:18:02.934 real 0m51.142s 00:18:02.934 user 3m15.897s 00:18:02.934 sys 0m2.718s 00:18:02.934 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.934 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:02.934 ************************************ 00:18:02.934 END TEST nvmf_vfio_user 00:18:02.934 ************************************ 00:18:02.934 07:27:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:02.934 07:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.934 07:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.934 07:27:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.934 ************************************ 00:18:02.934 START TEST nvmf_vfio_user_nvme_compliance 00:18:02.934 ************************************ 00:18:02.934 07:27:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:03.197 * Looking for test storage... 
00:18:03.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:03.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.197 --rc genhtml_branch_coverage=1 00:18:03.197 --rc genhtml_function_coverage=1 00:18:03.197 --rc genhtml_legend=1 00:18:03.197 --rc geninfo_all_blocks=1 00:18:03.197 --rc geninfo_unexecuted_blocks=1 00:18:03.197 00:18:03.197 ' 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:03.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.197 --rc genhtml_branch_coverage=1 00:18:03.197 --rc genhtml_function_coverage=1 00:18:03.197 --rc genhtml_legend=1 00:18:03.197 --rc geninfo_all_blocks=1 00:18:03.197 --rc geninfo_unexecuted_blocks=1 00:18:03.197 00:18:03.197 ' 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:03.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.197 --rc genhtml_branch_coverage=1 00:18:03.197 --rc genhtml_function_coverage=1 00:18:03.197 --rc genhtml_legend=1 00:18:03.197 --rc geninfo_all_blocks=1 00:18:03.197 --rc geninfo_unexecuted_blocks=1 00:18:03.197 00:18:03.197 ' 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:03.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.197 --rc genhtml_branch_coverage=1 00:18:03.197 --rc genhtml_function_coverage=1 00:18:03.197 --rc genhtml_legend=1 00:18:03.197 --rc geninfo_all_blocks=1 00:18:03.197 --rc 
geninfo_unexecuted_blocks=1 00:18:03.197 00:18:03.197 ' 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:03.197 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:03.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1419567 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1419567' 00:18:03.198 Process pid: 1419567 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1419567 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1419567 ']' 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.198 07:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:03.198 [2024-11-26 07:27:31.244945] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:18:03.198 [2024-11-26 07:27:31.245018] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.459 [2024-11-26 07:27:31.332971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:03.459 [2024-11-26 07:27:31.371652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.459 [2024-11-26 07:27:31.371693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.459 [2024-11-26 07:27:31.371699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.459 [2024-11-26 07:27:31.371704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.459 [2024-11-26 07:27:31.371708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.459 [2024-11-26 07:27:31.373012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.459 [2024-11-26 07:27:31.373195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.459 [2024-11-26 07:27:31.373213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.030 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.030 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:04.030 07:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:04.971 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:04.971 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:04.971 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:04.971 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.971 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:05.232 malloc0 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:05.232 07:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.232 07:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:05.232 00:18:05.232 00:18:05.232 CUnit - A unit testing framework for C - Version 2.1-3 00:18:05.232 http://cunit.sourceforge.net/ 00:18:05.232 00:18:05.232 00:18:05.232 Suite: nvme_compliance 00:18:05.232 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-26 07:27:33.293633] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:05.232 [2024-11-26 07:27:33.294933] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:05.232 [2024-11-26 07:27:33.294945] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:05.232 [2024-11-26 07:27:33.294950] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:05.232 [2024-11-26 07:27:33.296645] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:05.232 passed 00:18:05.493 Test: admin_identify_ctrlr_verify_fused ...[2024-11-26 07:27:33.371137] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:05.493 [2024-11-26 07:27:33.374157] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:05.493 passed 00:18:05.493 Test: admin_identify_ns ...[2024-11-26 07:27:33.454526] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:05.493 [2024-11-26 07:27:33.515168] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:05.493 [2024-11-26 07:27:33.523169] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:05.493 [2024-11-26 07:27:33.544247] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:05.493 passed 00:18:05.753 Test: admin_get_features_mandatory_features ...[2024-11-26 07:27:33.616499] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:05.753 [2024-11-26 07:27:33.619512] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:05.753 passed 00:18:05.753 Test: admin_get_features_optional_features ...[2024-11-26 07:27:33.695984] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:05.753 [2024-11-26 07:27:33.699008] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:05.753 passed 00:18:05.753 Test: admin_set_features_number_of_queues ...[2024-11-26 07:27:33.774731] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:06.018 [2024-11-26 07:27:33.879244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:06.018 passed 00:18:06.018 Test: admin_get_log_page_mandatory_logs ...[2024-11-26 07:27:33.955294] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:06.018 [2024-11-26 07:27:33.958308] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:06.018 passed 00:18:06.018 Test: admin_get_log_page_with_lpo ...[2024-11-26 07:27:34.033525] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:06.018 [2024-11-26 07:27:34.101167] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:06.279 [2024-11-26 07:27:34.114219] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:06.279 passed 00:18:06.279 Test: fabric_property_get ...[2024-11-26 07:27:34.190308] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:06.279 [2024-11-26 07:27:34.191507] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:06.279 [2024-11-26 07:27:34.193323] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:06.279 passed 00:18:06.279 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-26 07:27:34.269802] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:06.279 [2024-11-26 07:27:34.270995] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:06.279 [2024-11-26 07:27:34.272819] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:06.279 passed 00:18:06.279 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-26 07:27:34.347563] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:06.540 [2024-11-26 07:27:34.432171] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:06.540 [2024-11-26 07:27:34.448169] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:06.540 [2024-11-26 07:27:34.453237] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:06.540 passed 00:18:06.540 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-26 07:27:34.526483] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:06.540 [2024-11-26 07:27:34.527694] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:06.540 [2024-11-26 07:27:34.529505] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller
00:18:06.540 passed
00:18:06.540 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-26 07:27:34.603233] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:06.801 [2024-11-26 07:27:34.681235] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:18:06.801 [2024-11-26 07:27:34.705166] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:18:06.801 [2024-11-26 07:27:34.710229] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:06.801 passed
00:18:06.801 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-26 07:27:34.783425] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:06.801 [2024-11-26 07:27:34.784615] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:18:06.801 [2024-11-26 07:27:34.784633] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:18:06.801 [2024-11-26 07:27:34.786437] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:06.801 passed
00:18:06.801 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-26 07:27:34.861505] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:07.062 [2024-11-26 07:27:34.953165] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:18:07.062 [2024-11-26 07:27:34.961165] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:18:07.062 [2024-11-26 07:27:34.969166] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:18:07.062 [2024-11-26 07:27:34.977164] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:18:07.062 [2024-11-26 07:27:35.009248] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:07.062 passed
00:18:07.062 Test: admin_create_io_sq_verify_pc ...[2024-11-26 07:27:35.080439] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:07.062 [2024-11-26 07:27:35.099172] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:18:07.062 [2024-11-26 07:27:35.116614] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:07.062 passed
00:18:07.322 Test: admin_create_io_qp_max_qps ...[2024-11-26 07:27:35.192066] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:08.264 [2024-11-26 07:27:36.288168] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs
00:18:08.834 [2024-11-26 07:27:36.674721] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:08.834 passed
00:18:08.834 Test: admin_create_io_sq_shared_cq ...[2024-11-26 07:27:36.747484] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:18:08.834 [2024-11-26 07:27:36.879170] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:18:08.834 [2024-11-26 07:27:36.916218] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:18:09.095 passed
00:18:09.095
00:18:09.095 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:09.095               suites      1      1    n/a      0        0
00:18:09.095                tests     18     18     18      0        0
00:18:09.095              asserts    360    360    360      0      n/a
00:18:09.095
00:18:09.095 Elapsed time = 1.490 seconds
00:18:09.095 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1419567
00:18:09.095 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1419567 ']'
00:18:09.095 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1419567
00:18:09.095 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname
00:18:09.095 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:09.095 07:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1419567
00:18:09.095 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:09.095 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:09.095 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1419567'
00:18:09.095 killing process with pid 1419567
00:18:09.095 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1419567
00:18:09.095 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1419567
00:18:09.095 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:18:09.095 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:18:09.095
00:18:09.095 real 0m6.201s
00:18:09.095 user 0m17.544s
00:18:09.095 sys 0m0.548s
00:18:09.095 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:09.095 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:18:09.095 ************************************
00:18:09.095 END TEST nvmf_vfio_user_nvme_compliance
00:18:09.095 ************************************
00:18:09.096 07:27:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:18:09.096 07:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:09.096 07:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:09.096 07:27:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:09.356 ************************************
00:18:09.356 START TEST nvmf_vfio_user_fuzz
00:18:09.356 ************************************
00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:18:09.357 * Looking for test storage...
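The storage probe that starts here is followed by an lcov version check, and the long scripts/common.sh trace below is its cmp_versions walk: split both version strings on `.` and `-`, then compare field by field. A condensed sketch of the same idea, assuming purely numeric fields:

    # lt A B -> exit 0 when version A sorts strictly before version B.
    lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal is not "less than"
    }

    lt 1.15 2 && echo "lcov predates 2.x"   # the 1.15-vs-2 comparison traced below

Missing fields default to 0, which is why the 1.15-vs-2 case is decided by the first field alone.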
00:18:09.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:09.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.357 --rc genhtml_branch_coverage=1 00:18:09.357 --rc genhtml_function_coverage=1 00:18:09.357 --rc genhtml_legend=1 00:18:09.357 --rc geninfo_all_blocks=1 00:18:09.357 --rc geninfo_unexecuted_blocks=1 00:18:09.357 00:18:09.357 ' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:09.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.357 --rc genhtml_branch_coverage=1 00:18:09.357 --rc genhtml_function_coverage=1 00:18:09.357 --rc genhtml_legend=1 00:18:09.357 --rc geninfo_all_blocks=1 00:18:09.357 --rc geninfo_unexecuted_blocks=1 00:18:09.357 00:18:09.357 ' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:09.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.357 --rc genhtml_branch_coverage=1 00:18:09.357 --rc genhtml_function_coverage=1 00:18:09.357 --rc genhtml_legend=1 00:18:09.357 --rc geninfo_all_blocks=1 00:18:09.357 --rc geninfo_unexecuted_blocks=1 00:18:09.357 00:18:09.357 ' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:09.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.357 --rc genhtml_branch_coverage=1 00:18:09.357 --rc genhtml_function_coverage=1 00:18:09.357 --rc genhtml_legend=1 00:18:09.357 --rc geninfo_all_blocks=1 00:18:09.357 --rc geninfo_unexecuted_blocks=1 00:18:09.357 00:18:09.357 ' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:09.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.357 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1420704 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1420704' 00:18:09.617 Process pid: 1420704 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1420704 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1420704 ']' 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
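`waitforlisten` then blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock, giving up after the `max_retries=100` budget set above. A minimal sketch of that polling idea; the real helper in autotest_common.sh is more elaborate, and the `rpc_get_methods` probe here is one plausible liveness check, not necessarily the call it uses:

    # Poll until the target's RPC socket answers, or the process dies.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target exited early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1   # never came up within the retry budget
    }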
00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.617 07:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:10.559 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.559 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:10.559 07:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.501 malloc0 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
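Collapsed, the rpc_cmd sequence above is the entire vfio-user target bring-up for the fuzzer. Spelled out as direct rpc.py calls (the test itself drives these through the rpc_cmd wrapper, so the command prefix here is illustrative):

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t VFIOUSER                 # vfio-user transport
    mkdir -p /var/run/vfio-user                            # socket directory for the listener
    $rpc bdev_malloc_create 64 512 -b malloc0              # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

Once the listener exists, the transport ID assembled into `$trid` above is all nvme_fuzz needs to reach it.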
00:18:11.501 07:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:43.616 Fuzzing completed. Shutting down the fuzz application 00:18:43.616 00:18:43.616 Dumping successful admin opcodes: 00:18:43.616 9, 10, 00:18:43.616 Dumping successful io opcodes: 00:18:43.616 0, 00:18:43.616 NS: 0x20000081ef00 I/O qp, Total commands completed: 1385196, total successful commands: 5438, random_seed: 2378278208 00:18:43.616 NS: 0x20000081ef00 admin qp, Total commands completed: 339344, total successful commands: 91, random_seed: 1232145472 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1420704 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1420704 ']' 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1420704 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1420704 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1420704' 00:18:43.616 killing process with pid 1420704 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1420704 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1420704 00:18:43.616 07:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:43.616 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:43.616 00:18:43.616 real 0m32.801s 00:18:43.616 user 0m37.693s 00:18:43.616 sys 0m24.161s 00:18:43.616 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.616 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.616 ************************************ 
00:18:43.616 END TEST nvmf_vfio_user_fuzz 00:18:43.616 ************************************ 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.617 ************************************ 00:18:43.617 START TEST nvmf_auth_target 00:18:43.617 ************************************ 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:43.617 * Looking for test storage... 00:18:43.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.617 --rc genhtml_branch_coverage=1 00:18:43.617 --rc genhtml_function_coverage=1 00:18:43.617 --rc genhtml_legend=1 00:18:43.617 --rc geninfo_all_blocks=1 00:18:43.617 --rc geninfo_unexecuted_blocks=1 00:18:43.617 00:18:43.617 ' 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.617 --rc genhtml_branch_coverage=1 00:18:43.617 --rc genhtml_function_coverage=1 00:18:43.617 --rc genhtml_legend=1 00:18:43.617 --rc geninfo_all_blocks=1 00:18:43.617 --rc geninfo_unexecuted_blocks=1 00:18:43.617 00:18:43.617 ' 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.617 --rc genhtml_branch_coverage=1 00:18:43.617 --rc genhtml_function_coverage=1 00:18:43.617 --rc genhtml_legend=1 00:18:43.617 --rc geninfo_all_blocks=1 00:18:43.617 --rc geninfo_unexecuted_blocks=1 00:18:43.617 00:18:43.617 ' 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.617 --rc genhtml_branch_coverage=1 00:18:43.617 --rc genhtml_function_coverage=1 00:18:43.617 --rc genhtml_legend=1 00:18:43.617 --rc geninfo_all_blocks=1 00:18:43.617 --rc geninfo_unexecuted_blocks=1 00:18:43.617 00:18:43.617 ' 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.617 07:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.617 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:43.618 07:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:18:50.214 
07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:50.214 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:50.214 07:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:50.214 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:50.214 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:50.214 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.214 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:50.215 07:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:50.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:18:50.215 00:18:50.215 --- 10.0.0.2 ping statistics --- 00:18:50.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.215 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:18:50.215 00:18:50.215 --- 10.0.0.1 ping statistics --- 00:18:50.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.215 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1430800 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1430800 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1430800 ']' 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
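The block above is the network fixture for everything that follows: the two E810 ports found during PCI discovery (exposed as cvl_0_0 and cvl_0_1 by the ice driver) are split across network namespaces, cvl_0_0 moving into cvl_0_0_ns_spdk as the target side with 10.0.0.2/24 while cvl_0_1 stays in the default namespace as the initiator side with 10.0.0.1/24; an iptables rule opens the NVMe/TCP listener port 4420, and the two pings prove reachability in both directions before nvmf_tgt is launched inside the namespace with -L nvmf_auth for authentication-level debug logging. A condensed, stand-alone sketch of the same setup, with interface names taken from this trace (substitute your own back-to-back ports):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator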
00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.215 07:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1431031 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e02c4ce7615915d8f0d26639780aaf3cf048b80f64526fcf 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.aa6 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e02c4ce7615915d8f0d26639780aaf3cf048b80f64526fcf 0 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e02c4ce7615915d8f0d26639780aaf3cf048b80f64526fcf 0 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e02c4ce7615915d8f0d26639780aaf3cf048b80f64526fcf 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
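gen_dhchap_key, traced here, draws len/2 random bytes with xxd -p (so len counts hex characters) and feeds the hex string to an inline Python formatter. Judging from the secrets that surface later in this log, for example DHHC-1:00:ZTAyYzRjZTc2...h+FV4Q==: for the key e02c4ce7... generated above, the formatter emits the NVMe-oF DH-HMAC-CHAP ASCII secret representation: base64 of the literal hex text with its little-endian CRC-32 appended (evidently the trailing h+FV4Q== here), wrapped as DHHC-1:<hash-id>:<base64>:, with the hash id taken from the digests map (null=0, sha256=1, sha384=2, sha512=3). A hedged reconstruction of that "python -" step under those assumptions, not SPDK's verbatim code:

key=e02c4ce7615915d8f0d26639780aaf3cf048b80f64526fcf   # hex secret from the trace above
digest=0                                               # 0 = null per the digests map
python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                   # the hex text itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # CRC-32 of the secret, little endian
b64 = base64.b64encode(secret + crc).decode()
print("DHHC-1:{:02}:{}:".format(int(sys.argv[2]), b64))
PYEOF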
00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.aa6 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.aa6 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.aa6 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d12901c66b97643f199bf4342a92baf4486e5964dadb90f2b333c4f35e241f03 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GQI 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d12901c66b97643f199bf4342a92baf4486e5964dadb90f2b333c4f35e241f03 3 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d12901c66b97643f199bf4342a92baf4486e5964dadb90f2b333c4f35e241f03 3 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d12901c66b97643f199bf4342a92baf4486e5964dadb90f2b333c4f35e241f03 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:50.788 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GQI 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GQI 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.GQI 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
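Each formatted secret lands in a mktemp file that is chmod 0600'd, and the helper echoes the path, so keys[i] and ckeys[i] carry file names rather than key material; those paths are what later gets registered with the keyrings. The pairing is the point: keys[i] is the host's DH-HMAC-CHAP secret, and ckeys[i], when set, is the controller secret that makes the handshake bidirectional. The pattern being built up, in shell terms:

keys[0]=$(gen_dhchap_key null 48)     # /tmp/spdk.key-null.XXX, host secret
ckeys[0]=$(gen_dhchap_key sha512 64)  # /tmp/spdk.key-sha512.XXX, controller secret for mutual auth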
00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b06198a918ea831e1b021f42527e20c6 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7Yz 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b06198a918ea831e1b021f42527e20c6 1 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b06198a918ea831e1b021f42527e20c6 1 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b06198a918ea831e1b021f42527e20c6 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7Yz 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7Yz 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.7Yz 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=669301357b8ca5263de0dde4382179ca010e5b188f002a9d 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lHV 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 669301357b8ca5263de0dde4382179ca010e5b188f002a9d 2 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 669301357b8ca5263de0dde4382179ca010e5b188f002a9d 2 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:51.051 07:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=669301357b8ca5263de0dde4382179ca010e5b188f002a9d 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:51.051 07:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lHV 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lHV 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.lHV 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2a1333bd2822cac4b766f24a80867a672295490848cc714a 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Fhi 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2a1333bd2822cac4b766f24a80867a672295490848cc714a 2 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2a1333bd2822cac4b766f24a80867a672295490848cc714a 2 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2a1333bd2822cac4b766f24a80867a672295490848cc714a 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Fhi 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Fhi 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Fhi 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
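By the end of this generation block (it completes just below with keys[3]), auth.sh steps @94 through @97 have assembled a small matrix of secrets covering all four transforms from the digests map and three lengths (32, 48 and 64 hex characters):

keys[0] null/48     ckeys[0] sha512/64
keys[1] sha256/32   ckeys[1] sha384/48
keys[2] sha384/48   ckeys[2] sha256/32
keys[3] sha512/64   ckeys[3] (unset)

ckeys[3] is deliberately left empty, so the rounds driven with key3 exercise unidirectional authentication (host key only, no --dhchap-ctrlr-key), while the other indices test the bidirectional path.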
00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6659f7faab3e12ddbbc9b4781b481dc9 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.C26 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6659f7faab3e12ddbbc9b4781b481dc9 1 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6659f7faab3e12ddbbc9b4781b481dc9 1 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6659f7faab3e12ddbbc9b4781b481dc9 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:18:51.051 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.C26 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.C26 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.C26 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8c4bc101bfa57a00998972e16933fcef75de3168ff18ef4a024335aefc01fb12 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ak8 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 8c4bc101bfa57a00998972e16933fcef75de3168ff18ef4a024335aefc01fb12 3 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8c4bc101bfa57a00998972e16933fcef75de3168ff18ef4a024335aefc01fb12 3 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8c4bc101bfa57a00998972e16933fcef75de3168ff18ef4a024335aefc01fb12 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ak8 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ak8 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ak8 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1430800 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1430800 ']' 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.314 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1431031 /var/tmp/host.sock 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1431031 ']' 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:51.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
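At this point two SPDK processes are listening: nvmf_tgt, the authenticating target, on the default /var/tmp/spdk.sock RPC socket, and the spdk_tgt started with -r /var/tmp/host.sock -L nvme_auth, which acts as the NVMe-oF host. Each key file has to be present in both processes' keyrings, so the trace below registers every key twice, once per socket: bare rpc_cmd goes to the target, the hostrpc wrapper to the host. The equivalent direct calls (rpc.py abbreviates scripts/rpc.py in the SPDK tree; paths from this run):

rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.aa6
rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.aa6
rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GQI
rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GQI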
00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aa6 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.aa6 00:18:51.580 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.aa6 00:18:51.842 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.GQI ]] 00:18:51.842 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GQI 00:18:51.842 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.842 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.842 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.842 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GQI 00:18:51.842 07:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GQI 00:18:52.102 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:52.102 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7Yz 00:18:52.102 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.102 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.102 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.102 07:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.7Yz 00:18:52.102 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.7Yz 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.lHV ]] 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lHV 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lHV 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lHV 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Fhi 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Fhi 00:18:52.363 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Fhi 00:18:52.625 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.C26 ]] 00:18:52.625 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C26 00:18:52.625 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.625 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.625 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.625 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C26 00:18:52.625 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C26 00:18:52.887 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:52.887 07:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ak8 00:18:52.887 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.887 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.887 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.887 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ak8 00:18:52.887 07:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ak8 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.150 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.412 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.412 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.412 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.412 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.412 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.412 
07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.412 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.674 { 00:18:53.674 "cntlid": 1, 00:18:53.674 "qid": 0, 00:18:53.674 "state": "enabled", 00:18:53.674 "thread": "nvmf_tgt_poll_group_000", 00:18:53.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:53.674 "listen_address": { 00:18:53.674 "trtype": "TCP", 00:18:53.674 "adrfam": "IPv4", 00:18:53.674 "traddr": "10.0.0.2", 00:18:53.674 "trsvcid": "4420" 00:18:53.674 }, 00:18:53.674 "peer_address": { 00:18:53.674 "trtype": "TCP", 00:18:53.674 "adrfam": "IPv4", 00:18:53.674 "traddr": "10.0.0.1", 00:18:53.674 "trsvcid": "38796" 00:18:53.674 }, 00:18:53.674 "auth": { 00:18:53.674 "state": "completed", 00:18:53.674 "digest": "sha256", 00:18:53.674 "dhgroup": "null" 00:18:53.674 } 00:18:53.674 } 00:18:53.674 ]' 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:53.674 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.936 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:53.936 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.936 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.936 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.936 07:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.198 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:18:54.198 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:18:54.769 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.769 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.769 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.769 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.769 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.769 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.769 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.769 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.030 07:28:22 
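That closes the first full authentication round (key index 0); the loop now repeats the identical cycle for the remaining indices, and later parts of the test re-run it for the other digests and the ffdhe dhgroups declared at the top. Condensed from the trace, one round looks like this (rpc.py again abbreviates the full scripts/rpc.py path; <hostnqn> and <uuid> stand for the long nqn.2014-08.org.nvmexpress:uuid values from this run, and the DHHC-1 strings are elided):

# 1. pin the host to a single digest/dhgroup combination
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# 2. admit the host on the subsystem with this round's key pair
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. attach from the SPDK host, check the target's qpairs report auth completed, detach
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# 4. repeat the handshake with the kernel initiator, secrets passed inline
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <hostnqn> --hostid <uuid> \
    -l 0 --dhchap-secret DHHC-1:00:<...>: --dhchap-ctrl-secret DHHC-1:03:<...>:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# 5. tear down so the next round starts clean
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>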
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.030 07:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.292 00:18:55.292 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.292 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.292 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.292 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.292 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.292 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.292 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.292 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.292 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.292 { 00:18:55.292 "cntlid": 3, 00:18:55.292 "qid": 0, 00:18:55.292 "state": "enabled", 00:18:55.292 "thread": "nvmf_tgt_poll_group_000", 00:18:55.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:55.292 "listen_address": { 00:18:55.292 "trtype": "TCP", 00:18:55.292 "adrfam": "IPv4", 00:18:55.292 "traddr": "10.0.0.2", 00:18:55.292 "trsvcid": "4420" 00:18:55.292 }, 00:18:55.292 "peer_address": { 00:18:55.292 "trtype": "TCP", 00:18:55.292 "adrfam": "IPv4", 00:18:55.292 "traddr": "10.0.0.1", 00:18:55.292 "trsvcid": "38822" 00:18:55.292 }, 00:18:55.292 "auth": { 00:18:55.292 "state": "completed", 00:18:55.292 "digest": "sha256", 00:18:55.292 "dhgroup": "null" 00:18:55.292 } 00:18:55.292 } 00:18:55.292 ]' 00:18:55.292 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.553 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.553 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.553 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:55.553 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.553 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.553 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.553 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.813 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:18:55.813 07:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:18:56.384 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.384 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.384 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.384 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.384 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.384 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.384 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.384 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.646 07:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.646 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.906 00:18:56.906 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.906 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.907 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.907 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.907 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.907 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.907 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.907 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.907 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.907 { 00:18:56.907 "cntlid": 5, 00:18:56.907 "qid": 0, 00:18:56.907 "state": "enabled", 00:18:56.907 "thread": "nvmf_tgt_poll_group_000", 00:18:56.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.907 "listen_address": { 00:18:56.907 "trtype": "TCP", 00:18:56.907 "adrfam": "IPv4", 00:18:56.907 "traddr": "10.0.0.2", 00:18:56.907 "trsvcid": "4420" 00:18:56.907 }, 00:18:56.907 "peer_address": { 00:18:56.907 "trtype": "TCP", 00:18:56.907 "adrfam": "IPv4", 00:18:56.907 "traddr": "10.0.0.1", 00:18:56.907 "trsvcid": "32802" 00:18:56.907 }, 00:18:56.907 "auth": { 00:18:56.907 "state": "completed", 00:18:56.907 "digest": "sha256", 00:18:56.907 "dhgroup": "null" 00:18:56.907 } 00:18:56.907 } 00:18:56.907 ]' 00:18:57.167 07:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.167 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.167 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.167 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:57.167 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.167 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.167 07:28:25 
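After each attach, nvmf_subsystem_get_qpairs is the target-side evidence that the handshake really ran: qid 0 is the new controller's admin queue, and its auth object carries the negotiated state, digest and dhgroup, which the harness asserts with jq one field at a time; note the cntlid advancing with every fresh dynamically allocated controller (1, 3, 5 so far in this run). The same three assertions in one line:

rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
# expected for these rounds: completed sha256 null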
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.167 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.428 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:18:57.428 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:18:58.000 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.000 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.000 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.000 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.000 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.000 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.000 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.000 07:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:58.261 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:58.522 00:18:58.522 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.522 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.522 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.522 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.522 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.522 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.522 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.522 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.522 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.522 { 00:18:58.522 "cntlid": 7, 00:18:58.522 "qid": 0, 00:18:58.522 "state": "enabled", 00:18:58.522 "thread": "nvmf_tgt_poll_group_000", 00:18:58.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.522 "listen_address": { 00:18:58.522 "trtype": "TCP", 00:18:58.522 "adrfam": "IPv4", 00:18:58.522 "traddr": "10.0.0.2", 00:18:58.522 "trsvcid": "4420" 00:18:58.522 }, 00:18:58.522 "peer_address": { 00:18:58.522 "trtype": "TCP", 00:18:58.522 "adrfam": "IPv4", 00:18:58.522 "traddr": "10.0.0.1", 00:18:58.522 "trsvcid": "32830" 00:18:58.522 }, 00:18:58.522 "auth": { 00:18:58.522 "state": "completed", 00:18:58.522 "digest": "sha256", 00:18:58.522 "dhgroup": "null" 00:18:58.522 } 00:18:58.522 } 00:18:58.522 ]' 00:18:58.523 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.783 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.783 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.783 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:58.783 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.783 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.783 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.783 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.044 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:18:59.044 07:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:18:59.617 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.617 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.617 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.617 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.617 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.617 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.617 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.617 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:59.617 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.877 07:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.137 00:19:00.137 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.137 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.137 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.137 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.137 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.137 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.137 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.137 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.137 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.137 { 00:19:00.137 "cntlid": 9, 00:19:00.137 "qid": 0, 00:19:00.137 "state": "enabled", 00:19:00.137 "thread": "nvmf_tgt_poll_group_000", 00:19:00.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:00.137 "listen_address": { 00:19:00.137 "trtype": "TCP", 00:19:00.137 "adrfam": "IPv4", 00:19:00.137 "traddr": "10.0.0.2", 00:19:00.137 "trsvcid": "4420" 00:19:00.137 }, 00:19:00.137 "peer_address": { 00:19:00.137 "trtype": "TCP", 00:19:00.137 "adrfam": "IPv4", 00:19:00.137 "traddr": "10.0.0.1", 00:19:00.137 "trsvcid": "32860" 00:19:00.137 }, 00:19:00.137 "auth": { 00:19:00.137 "state": "completed", 00:19:00.137 "digest": "sha256", 00:19:00.137 "dhgroup": "ffdhe2048" 00:19:00.137 } 00:19:00.137 } 00:19:00.137 ]' 00:19:00.137 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.398 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.398 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.398 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:00.398 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.398 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.398 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.398 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.659 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:00.659 07:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:01.231 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.231 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.231 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.231 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.231 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.231 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.231 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.231 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.490 07:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.490 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.750 00:19:01.750 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.750 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.750 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.750 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.750 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.750 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.750 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.750 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.750 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.750 { 00:19:01.750 "cntlid": 11, 00:19:01.750 "qid": 0, 00:19:01.750 "state": "enabled", 00:19:01.750 "thread": "nvmf_tgt_poll_group_000", 00:19:01.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.750 "listen_address": { 00:19:01.750 "trtype": "TCP", 00:19:01.750 "adrfam": "IPv4", 00:19:01.750 "traddr": "10.0.0.2", 00:19:01.750 "trsvcid": "4420" 00:19:01.750 }, 00:19:01.750 "peer_address": { 00:19:01.750 "trtype": "TCP", 00:19:01.750 "adrfam": "IPv4", 00:19:01.750 "traddr": "10.0.0.1", 00:19:01.750 "trsvcid": "32882" 00:19:01.750 }, 00:19:01.750 "auth": { 00:19:01.750 "state": "completed", 00:19:01.750 "digest": "sha256", 00:19:01.750 "dhgroup": "ffdhe2048" 00:19:01.750 } 00:19:01.750 } 00:19:01.750 ]' 00:19:01.750 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.011 07:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.011 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.011 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.011 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.011 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.011 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.011 07:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.273 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:02.273 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:02.843 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.843 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.843 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.843 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.843 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.843 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.843 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:02.843 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:03.104 07:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.104 07:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.364 00:19:03.364 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.364 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.364 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.364 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.364 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.364 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.364 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.364 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.364 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.364 { 00:19:03.364 "cntlid": 13, 00:19:03.364 "qid": 0, 00:19:03.364 "state": "enabled", 00:19:03.364 "thread": "nvmf_tgt_poll_group_000", 00:19:03.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.364 "listen_address": { 00:19:03.364 "trtype": "TCP", 00:19:03.364 "adrfam": "IPv4", 00:19:03.364 "traddr": "10.0.0.2", 00:19:03.364 "trsvcid": "4420" 00:19:03.364 }, 00:19:03.364 "peer_address": { 00:19:03.364 "trtype": "TCP", 00:19:03.364 "adrfam": "IPv4", 00:19:03.364 "traddr": "10.0.0.1", 00:19:03.364 "trsvcid": "32914" 00:19:03.364 }, 00:19:03.364 "auth": { 00:19:03.364 "state": "completed", 00:19:03.364 "digest": 
"sha256", 00:19:03.364 "dhgroup": "ffdhe2048" 00:19:03.364 } 00:19:03.364 } 00:19:03.364 ]' 00:19:03.364 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.626 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.626 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.626 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:03.626 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.626 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.626 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.626 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.886 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:03.886 07:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:04.456 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.456 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.456 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.456 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.456 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.456 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.456 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.456 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.764 07:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.764 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.764 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.061 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.061 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.061 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.061 07:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.061 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.061 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.061 { 00:19:05.061 "cntlid": 15, 00:19:05.061 "qid": 0, 00:19:05.061 "state": "enabled", 00:19:05.061 "thread": "nvmf_tgt_poll_group_000", 00:19:05.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:05.061 "listen_address": { 00:19:05.061 "trtype": "TCP", 00:19:05.061 "adrfam": "IPv4", 00:19:05.061 "traddr": "10.0.0.2", 00:19:05.061 "trsvcid": "4420" 00:19:05.061 }, 00:19:05.061 "peer_address": { 00:19:05.061 "trtype": "TCP", 00:19:05.061 "adrfam": "IPv4", 00:19:05.061 "traddr": "10.0.0.1", 00:19:05.061 
"trsvcid": "32946" 00:19:05.061 }, 00:19:05.061 "auth": { 00:19:05.061 "state": "completed", 00:19:05.061 "digest": "sha256", 00:19:05.061 "dhgroup": "ffdhe2048" 00:19:05.061 } 00:19:05.061 } 00:19:05.061 ]' 00:19:05.061 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.061 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.061 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.061 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.061 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.061 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.061 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.061 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.326 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:05.326 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:05.897 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.897 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.897 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.897 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.897 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.897 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.897 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.897 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:05.897 07:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:06.158 07:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.158 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.418 00:19:06.418 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.418 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.418 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.679 { 00:19:06.679 "cntlid": 17, 00:19:06.679 "qid": 0, 00:19:06.679 "state": "enabled", 00:19:06.679 "thread": "nvmf_tgt_poll_group_000", 00:19:06.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.679 "listen_address": { 00:19:06.679 "trtype": "TCP", 00:19:06.679 "adrfam": "IPv4", 
00:19:06.679 "traddr": "10.0.0.2", 00:19:06.679 "trsvcid": "4420" 00:19:06.679 }, 00:19:06.679 "peer_address": { 00:19:06.679 "trtype": "TCP", 00:19:06.679 "adrfam": "IPv4", 00:19:06.679 "traddr": "10.0.0.1", 00:19:06.679 "trsvcid": "32978" 00:19:06.679 }, 00:19:06.679 "auth": { 00:19:06.679 "state": "completed", 00:19:06.679 "digest": "sha256", 00:19:06.679 "dhgroup": "ffdhe3072" 00:19:06.679 } 00:19:06.679 } 00:19:06.679 ]' 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.679 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.940 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.940 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:06.940 07:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:07.882 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.882 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.882 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.882 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.882 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.882 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.882 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.883 07:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.144 00:19:08.144 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.144 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.144 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.405 { 
00:19:08.405 "cntlid": 19, 00:19:08.405 "qid": 0, 00:19:08.405 "state": "enabled", 00:19:08.405 "thread": "nvmf_tgt_poll_group_000", 00:19:08.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:08.405 "listen_address": { 00:19:08.405 "trtype": "TCP", 00:19:08.405 "adrfam": "IPv4", 00:19:08.405 "traddr": "10.0.0.2", 00:19:08.405 "trsvcid": "4420" 00:19:08.405 }, 00:19:08.405 "peer_address": { 00:19:08.405 "trtype": "TCP", 00:19:08.405 "adrfam": "IPv4", 00:19:08.405 "traddr": "10.0.0.1", 00:19:08.405 "trsvcid": "41294" 00:19:08.405 }, 00:19:08.405 "auth": { 00:19:08.405 "state": "completed", 00:19:08.405 "digest": "sha256", 00:19:08.405 "dhgroup": "ffdhe3072" 00:19:08.405 } 00:19:08.405 } 00:19:08.405 ]' 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.405 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.665 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:08.665 07:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:09.236 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.236 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.236 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.236 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.236 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.236 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.236 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:09.236 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.497 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.757 00:19:09.757 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.757 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.757 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.018 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.018 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.018 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.018 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.018 07:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.018 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.018 { 00:19:10.018 "cntlid": 21, 00:19:10.018 "qid": 0, 00:19:10.018 "state": "enabled", 00:19:10.018 "thread": "nvmf_tgt_poll_group_000", 00:19:10.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:10.018 "listen_address": { 00:19:10.018 "trtype": "TCP", 00:19:10.018 "adrfam": "IPv4", 00:19:10.018 "traddr": "10.0.0.2", 00:19:10.018 "trsvcid": "4420" 00:19:10.018 }, 00:19:10.018 "peer_address": { 00:19:10.018 "trtype": "TCP", 00:19:10.018 "adrfam": "IPv4", 00:19:10.018 "traddr": "10.0.0.1", 00:19:10.018 "trsvcid": "41330" 00:19:10.018 }, 00:19:10.018 "auth": { 00:19:10.018 "state": "completed", 00:19:10.018 "digest": "sha256", 00:19:10.018 "dhgroup": "ffdhe3072" 00:19:10.018 } 00:19:10.018 } 00:19:10.018 ]' 00:19:10.018 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.018 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.018 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.018 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.018 07:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.018 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.018 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.018 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.280 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:10.280 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:10.853 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.853 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.853 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.853 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.853 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:10.853 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.853 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.853 07:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.116 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.377 00:19:11.377 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.377 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.377 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.638 07:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.638 { 00:19:11.638 "cntlid": 23, 00:19:11.638 "qid": 0, 00:19:11.638 "state": "enabled", 00:19:11.638 "thread": "nvmf_tgt_poll_group_000", 00:19:11.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:11.638 "listen_address": { 00:19:11.638 "trtype": "TCP", 00:19:11.638 "adrfam": "IPv4", 00:19:11.638 "traddr": "10.0.0.2", 00:19:11.638 "trsvcid": "4420" 00:19:11.638 }, 00:19:11.638 "peer_address": { 00:19:11.638 "trtype": "TCP", 00:19:11.638 "adrfam": "IPv4", 00:19:11.638 "traddr": "10.0.0.1", 00:19:11.638 "trsvcid": "41346" 00:19:11.638 }, 00:19:11.638 "auth": { 00:19:11.638 "state": "completed", 00:19:11.638 "digest": "sha256", 00:19:11.638 "dhgroup": "ffdhe3072" 00:19:11.638 } 00:19:11.638 } 00:19:11.638 ]' 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.638 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.900 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:11.900 07:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:12.470 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.470 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.470 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.470 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.470 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:12.470 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.470 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.470 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.470 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.731 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.991 00:19:12.991 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.991 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.991 07:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.250 { 00:19:13.250 "cntlid": 25, 00:19:13.250 "qid": 0, 00:19:13.250 "state": "enabled", 00:19:13.250 "thread": "nvmf_tgt_poll_group_000", 00:19:13.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.250 "listen_address": { 00:19:13.250 "trtype": "TCP", 00:19:13.250 "adrfam": "IPv4", 00:19:13.250 "traddr": "10.0.0.2", 00:19:13.250 "trsvcid": "4420" 00:19:13.250 }, 00:19:13.250 "peer_address": { 00:19:13.250 "trtype": "TCP", 00:19:13.250 "adrfam": "IPv4", 00:19:13.250 "traddr": "10.0.0.1", 00:19:13.250 "trsvcid": "41362" 00:19:13.250 }, 00:19:13.250 "auth": { 00:19:13.250 "state": "completed", 00:19:13.250 "digest": "sha256", 00:19:13.250 "dhgroup": "ffdhe4096" 00:19:13.250 } 00:19:13.250 } 00:19:13.250 ]' 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.250 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.510 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:13.510 07:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:14.080 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.080 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.080 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.080 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.080 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.080 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.080 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.080 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.341 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.602 00:19:14.602 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.602 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.602 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.862 { 00:19:14.862 "cntlid": 27, 00:19:14.862 "qid": 0, 00:19:14.862 "state": "enabled", 00:19:14.862 "thread": "nvmf_tgt_poll_group_000", 00:19:14.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:14.862 "listen_address": { 00:19:14.862 "trtype": "TCP", 00:19:14.862 "adrfam": "IPv4", 00:19:14.862 "traddr": "10.0.0.2", 00:19:14.862 "trsvcid": "4420" 00:19:14.862 }, 00:19:14.862 "peer_address": { 00:19:14.862 "trtype": "TCP", 00:19:14.862 "adrfam": "IPv4", 00:19:14.862 "traddr": "10.0.0.1", 00:19:14.862 "trsvcid": "41390" 00:19:14.862 }, 00:19:14.862 "auth": { 00:19:14.862 "state": "completed", 00:19:14.862 "digest": "sha256", 00:19:14.862 "dhgroup": "ffdhe4096" 00:19:14.862 } 00:19:14.862 } 00:19:14.862 ]' 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.862 07:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.122 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:15.122 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:15.693 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:15.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.693 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.693 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.693 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.693 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.693 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.693 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.693 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.954 07:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.214 00:19:16.214 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
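# A minimal sketch of the connect_authenticate round the trace above repeats for
# each digest/dhgroup/key combination; it is a reading aid, not part of the
# captured log. Assumes scripts/rpc.py from the SPDK tree, a target listening on
# its default RPC socket, a host application on /var/tmp/host.sock, and
# DH-HMAC-CHAP keyrings key0/ckey0 already loaded; SUBNQN and HOSTNQN below are
# illustrative placeholders, not values from this run.
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:example
# Restrict the host to a single digest and DH group for this iteration.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# Allow the host on the subsystem with the key pair under test (target-side RPC).
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Connect from the host side, authenticating with the same keys.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
# The [[ ... ]] checks in the trace then assert these fields on the new qpair:
rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" \
        | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
# Tear down so the next digest/dhgroup/key iteration starts from a clean state.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0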
00:19:16.214 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.214 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.475 { 00:19:16.475 "cntlid": 29, 00:19:16.475 "qid": 0, 00:19:16.475 "state": "enabled", 00:19:16.475 "thread": "nvmf_tgt_poll_group_000", 00:19:16.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:16.475 "listen_address": { 00:19:16.475 "trtype": "TCP", 00:19:16.475 "adrfam": "IPv4", 00:19:16.475 "traddr": "10.0.0.2", 00:19:16.475 "trsvcid": "4420" 00:19:16.475 }, 00:19:16.475 "peer_address": { 00:19:16.475 "trtype": "TCP", 00:19:16.475 "adrfam": "IPv4", 00:19:16.475 "traddr": "10.0.0.1", 00:19:16.475 "trsvcid": "41418" 00:19:16.475 }, 00:19:16.475 "auth": { 00:19:16.475 "state": "completed", 00:19:16.475 "digest": "sha256", 00:19:16.475 "dhgroup": "ffdhe4096" 00:19:16.475 } 00:19:16.475 } 00:19:16.475 ]' 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.475 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.736 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:16.736 07:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: 
--dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:17.307 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.307 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.307 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.307 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.307 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.307 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.307 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.307 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.568 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.828 00:19:17.828 07:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.828 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.828 07:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.090 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.090 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.090 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.091 { 00:19:18.091 "cntlid": 31, 00:19:18.091 "qid": 0, 00:19:18.091 "state": "enabled", 00:19:18.091 "thread": "nvmf_tgt_poll_group_000", 00:19:18.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:18.091 "listen_address": { 00:19:18.091 "trtype": "TCP", 00:19:18.091 "adrfam": "IPv4", 00:19:18.091 "traddr": "10.0.0.2", 00:19:18.091 "trsvcid": "4420" 00:19:18.091 }, 00:19:18.091 "peer_address": { 00:19:18.091 "trtype": "TCP", 00:19:18.091 "adrfam": "IPv4", 00:19:18.091 "traddr": "10.0.0.1", 00:19:18.091 "trsvcid": "59214" 00:19:18.091 }, 00:19:18.091 "auth": { 00:19:18.091 "state": "completed", 00:19:18.091 "digest": "sha256", 00:19:18.091 "dhgroup": "ffdhe4096" 00:19:18.091 } 00:19:18.091 } 00:19:18.091 ]' 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.091 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.352 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:18.352 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:18.923 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.923 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.923 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.923 07:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.923 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.923 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.923 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.923 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.923 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.183 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.445 00:19:19.445 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.445 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.445 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.705 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.705 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.705 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.705 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.705 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.705 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.705 { 00:19:19.705 "cntlid": 33, 00:19:19.705 "qid": 0, 00:19:19.705 "state": "enabled", 00:19:19.705 "thread": "nvmf_tgt_poll_group_000", 00:19:19.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:19.705 "listen_address": { 00:19:19.705 "trtype": "TCP", 00:19:19.705 "adrfam": "IPv4", 00:19:19.705 "traddr": "10.0.0.2", 00:19:19.705 "trsvcid": "4420" 00:19:19.705 }, 00:19:19.705 "peer_address": { 00:19:19.705 "trtype": "TCP", 00:19:19.705 "adrfam": "IPv4", 00:19:19.705 "traddr": "10.0.0.1", 00:19:19.705 "trsvcid": "59244" 00:19:19.705 }, 00:19:19.705 "auth": { 00:19:19.705 "state": "completed", 00:19:19.705 "digest": "sha256", 00:19:19.705 "dhgroup": "ffdhe6144" 00:19:19.705 } 00:19:19.705 } 00:19:19.705 ]' 00:19:19.705 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.705 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:19.705 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.965 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.965 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.965 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.965 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.965 07:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.965 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:19.965 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.907 07:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.168 00:19:21.168 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.168 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.168 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.428 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.428 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.428 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.428 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.428 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.428 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.428 { 00:19:21.428 "cntlid": 35, 00:19:21.428 "qid": 0, 00:19:21.428 "state": "enabled", 00:19:21.428 "thread": "nvmf_tgt_poll_group_000", 00:19:21.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:21.428 "listen_address": { 00:19:21.428 "trtype": "TCP", 00:19:21.428 "adrfam": "IPv4", 00:19:21.428 "traddr": "10.0.0.2", 00:19:21.428 "trsvcid": "4420" 00:19:21.428 }, 00:19:21.428 "peer_address": { 00:19:21.428 "trtype": "TCP", 00:19:21.428 "adrfam": "IPv4", 00:19:21.428 "traddr": "10.0.0.1", 00:19:21.428 "trsvcid": "59266" 00:19:21.428 }, 00:19:21.428 "auth": { 00:19:21.428 "state": "completed", 00:19:21.428 "digest": "sha256", 00:19:21.428 "dhgroup": "ffdhe6144" 00:19:21.428 } 00:19:21.428 } 00:19:21.428 ]' 00:19:21.428 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.428 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.428 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.688 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:21.688 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.688 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.688 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.688 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.688 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:21.688 07:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.631 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.892 00:19:22.892 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.892 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.892 07:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.153 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.153 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.153 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.153 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.153 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.153 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.153 { 00:19:23.153 "cntlid": 37, 00:19:23.153 "qid": 0, 00:19:23.153 "state": "enabled", 00:19:23.153 "thread": "nvmf_tgt_poll_group_000", 00:19:23.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:23.153 "listen_address": { 00:19:23.153 "trtype": "TCP", 00:19:23.153 "adrfam": "IPv4", 00:19:23.153 "traddr": "10.0.0.2", 00:19:23.153 "trsvcid": "4420" 00:19:23.153 }, 00:19:23.153 "peer_address": { 00:19:23.153 "trtype": "TCP", 00:19:23.153 "adrfam": "IPv4", 00:19:23.153 "traddr": "10.0.0.1", 00:19:23.153 "trsvcid": "59290" 00:19:23.153 }, 00:19:23.153 "auth": { 00:19:23.153 "state": "completed", 00:19:23.153 "digest": "sha256", 00:19:23.153 "dhgroup": "ffdhe6144" 00:19:23.153 } 00:19:23.153 } 00:19:23.153 ]' 00:19:23.153 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.153 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.153 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.414 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:23.414 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.414 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.414 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:23.414 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.414 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:23.414 07:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:24.356 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.356 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.356 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.357 07:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.357 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.618 00:19:24.618 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.618 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.618 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.879 { 00:19:24.879 "cntlid": 39, 00:19:24.879 "qid": 0, 00:19:24.879 "state": "enabled", 00:19:24.879 "thread": "nvmf_tgt_poll_group_000", 00:19:24.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:24.879 "listen_address": { 00:19:24.879 "trtype": "TCP", 00:19:24.879 "adrfam": "IPv4", 00:19:24.879 "traddr": "10.0.0.2", 00:19:24.879 "trsvcid": "4420" 00:19:24.879 }, 00:19:24.879 "peer_address": { 00:19:24.879 "trtype": "TCP", 00:19:24.879 "adrfam": "IPv4", 00:19:24.879 "traddr": "10.0.0.1", 00:19:24.879 "trsvcid": "59314" 00:19:24.879 }, 00:19:24.879 "auth": { 00:19:24.879 "state": "completed", 00:19:24.879 "digest": "sha256", 00:19:24.879 "dhgroup": "ffdhe6144" 00:19:24.879 } 00:19:24.879 } 00:19:24.879 ]' 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:24.879 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.140 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:25.140 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.140 07:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.140 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:25.140 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:25.711 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
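The trace above is one full pass of the test's inner loop: reconfigure the host's allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, register the host NQN on the subsystem via nvmf_subsystem_add_host with the key pair under test, attach a controller over the host RPC socket with the matching --dhchap-key/--dhchap-ctrlr-key, verify the qpair, then detach and remove the host before the next key index. Condensed into a standalone sketch (addresses, NQNs, flags and the host-socket path are taken verbatim from this run; RPC_PY is a stand-in for the full rpc.py path in the log, and the named keys are assumed to have been loaded earlier in the test):

# one DH-HMAC-CHAP round-trip, sha256 digest + ffdhe8192 DH group
RPC_PY=./scripts/rpc.py        # stand-in for /var/jenkins/workspace/.../spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

$RPC_PY -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192       # host-side initiator limits
$RPC_PY nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0                # target-side credentials
$RPC_PY -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0       # triggers the handshake
$RPC_PY -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC_PY nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

The outer loops visible in the xtrace (for dhgroup in "${dhgroups[@]}" / for keyid in "${!keys[@]}") repeat this round-trip for every digest, DH group and key-index combination, which is why the same command sequence recurs below with ffdhe6144, ffdhe8192 and null.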
00:19:25.972 07:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.972 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.972 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.972 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.972 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.544 00:19:26.544 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.544 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.544 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.805 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.805 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.805 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.805 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.806 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.806 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.806 { 00:19:26.806 "cntlid": 41, 00:19:26.806 "qid": 0, 00:19:26.806 "state": "enabled", 00:19:26.806 "thread": "nvmf_tgt_poll_group_000", 00:19:26.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:26.806 "listen_address": { 00:19:26.806 "trtype": "TCP", 00:19:26.806 "adrfam": "IPv4", 00:19:26.806 "traddr": "10.0.0.2", 00:19:26.806 "trsvcid": "4420" 00:19:26.806 }, 00:19:26.806 "peer_address": { 00:19:26.806 "trtype": "TCP", 00:19:26.806 "adrfam": "IPv4", 00:19:26.806 "traddr": "10.0.0.1", 00:19:26.806 "trsvcid": "59342" 00:19:26.806 }, 00:19:26.806 "auth": { 00:19:26.806 "state": "completed", 00:19:26.806 "digest": "sha256", 00:19:26.806 "dhgroup": "ffdhe8192" 00:19:26.806 } 00:19:26.806 } 00:19:26.806 ]' 00:19:26.806 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.806 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.806 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.806 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:26.806 07:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.806 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.806 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.806 07:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.065 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:27.065 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:27.637 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.637 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.637 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.637 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.637 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.637 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.637 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.637 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.899 07:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.471 00:19:28.471 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.471 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.471 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.471 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.471 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.471 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.471 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.471 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.471 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.471 { 00:19:28.471 "cntlid": 43, 00:19:28.471 "qid": 0, 00:19:28.471 "state": "enabled", 00:19:28.471 "thread": "nvmf_tgt_poll_group_000", 00:19:28.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.471 "listen_address": { 00:19:28.471 "trtype": "TCP", 00:19:28.471 "adrfam": "IPv4", 00:19:28.471 "traddr": "10.0.0.2", 00:19:28.471 "trsvcid": "4420" 00:19:28.471 }, 00:19:28.471 "peer_address": { 00:19:28.471 "trtype": "TCP", 00:19:28.471 "adrfam": "IPv4", 00:19:28.471 "traddr": "10.0.0.1", 00:19:28.471 "trsvcid": "42800" 00:19:28.471 }, 00:19:28.471 "auth": { 00:19:28.471 "state": "completed", 00:19:28.471 "digest": "sha256", 00:19:28.471 "dhgroup": "ffdhe8192" 00:19:28.471 } 00:19:28.471 } 00:19:28.471 ]' 00:19:28.471 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.732 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:19:28.732 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.732 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.732 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.732 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.732 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.732 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.994 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:28.994 07:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:29.565 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.566 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:29.566 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.566 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.566 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.566 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.566 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:29.566 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:29.826 07:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.826 07:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.087 00:19:30.347 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.347 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.347 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.347 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.347 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.347 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.347 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.347 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.347 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.347 { 00:19:30.347 "cntlid": 45, 00:19:30.347 "qid": 0, 00:19:30.347 "state": "enabled", 00:19:30.347 "thread": "nvmf_tgt_poll_group_000", 00:19:30.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:30.347 "listen_address": { 00:19:30.347 "trtype": "TCP", 00:19:30.347 "adrfam": "IPv4", 00:19:30.347 "traddr": "10.0.0.2", 00:19:30.347 "trsvcid": "4420" 00:19:30.347 }, 00:19:30.347 "peer_address": { 00:19:30.347 "trtype": "TCP", 00:19:30.347 "adrfam": "IPv4", 00:19:30.347 "traddr": "10.0.0.1", 00:19:30.347 "trsvcid": "42830" 00:19:30.347 }, 00:19:30.347 "auth": { 00:19:30.347 "state": "completed", 00:19:30.347 "digest": "sha256", 00:19:30.347 "dhgroup": "ffdhe8192" 00:19:30.347 } 00:19:30.347 } 00:19:30.347 ]' 00:19:30.347 
07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.608 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.608 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.608 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.608 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.608 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.608 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.608 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.869 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:30.869 07:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:31.441 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.441 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.441 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.441 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.441 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.441 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.441 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.441 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:31.703 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:19:31.703 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.703 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:31.703 07:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:31.703 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:31.704 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.704 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:31.704 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.704 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.704 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.704 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:31.704 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.704 07:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:31.965 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.226 { 00:19:32.226 "cntlid": 47, 00:19:32.226 "qid": 0, 00:19:32.226 "state": "enabled", 00:19:32.226 "thread": "nvmf_tgt_poll_group_000", 00:19:32.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:32.226 "listen_address": { 00:19:32.226 "trtype": "TCP", 00:19:32.226 "adrfam": "IPv4", 00:19:32.226 "traddr": "10.0.0.2", 00:19:32.226 "trsvcid": "4420" 00:19:32.226 }, 00:19:32.226 "peer_address": { 00:19:32.226 "trtype": "TCP", 00:19:32.226 "adrfam": "IPv4", 00:19:32.226 "traddr": "10.0.0.1", 00:19:32.226 "trsvcid": "42852" 00:19:32.226 }, 00:19:32.226 "auth": { 00:19:32.226 "state": "completed", 00:19:32.226 
"digest": "sha256", 00:19:32.226 "dhgroup": "ffdhe8192" 00:19:32.226 } 00:19:32.226 } 00:19:32.226 ]' 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.226 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.487 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.487 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:32.487 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.487 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.487 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.487 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.748 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:32.748 07:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:33.320 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.320 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.320 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.320 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.320 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.320 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:33.320 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.320 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.320 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.320 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:19:33.582 07:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.582 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.843 00:19:33.843 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.843 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.843 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.843 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.843 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.843 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.843 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.843 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.843 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.843 { 00:19:33.843 "cntlid": 49, 00:19:33.843 "qid": 0, 00:19:33.843 "state": "enabled", 00:19:33.843 "thread": "nvmf_tgt_poll_group_000", 00:19:33.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:33.843 "listen_address": { 00:19:33.843 "trtype": "TCP", 00:19:33.843 "adrfam": "IPv4", 
00:19:33.843 "traddr": "10.0.0.2", 00:19:33.843 "trsvcid": "4420" 00:19:33.843 }, 00:19:33.843 "peer_address": { 00:19:33.843 "trtype": "TCP", 00:19:33.843 "adrfam": "IPv4", 00:19:33.843 "traddr": "10.0.0.1", 00:19:33.843 "trsvcid": "42864" 00:19:33.843 }, 00:19:33.843 "auth": { 00:19:33.843 "state": "completed", 00:19:33.843 "digest": "sha384", 00:19:33.843 "dhgroup": "null" 00:19:33.843 } 00:19:33.843 } 00:19:33.843 ]' 00:19:33.843 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.104 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.104 07:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.104 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:34.104 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.104 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.104 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.104 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.364 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:34.364 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:34.936 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.936 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.936 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.936 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.936 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.936 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.936 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:34.936 07:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.197 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.198 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.198 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.458 00:19:35.458 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.458 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.458 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.458 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.458 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.458 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.458 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.458 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.458 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.458 { 00:19:35.458 "cntlid": 51, 00:19:35.458 "qid": 0, 00:19:35.458 "state": "enabled", 
00:19:35.458 "thread": "nvmf_tgt_poll_group_000", 00:19:35.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:35.458 "listen_address": { 00:19:35.458 "trtype": "TCP", 00:19:35.458 "adrfam": "IPv4", 00:19:35.458 "traddr": "10.0.0.2", 00:19:35.458 "trsvcid": "4420" 00:19:35.458 }, 00:19:35.458 "peer_address": { 00:19:35.458 "trtype": "TCP", 00:19:35.458 "adrfam": "IPv4", 00:19:35.458 "traddr": "10.0.0.1", 00:19:35.458 "trsvcid": "42878" 00:19:35.458 }, 00:19:35.458 "auth": { 00:19:35.458 "state": "completed", 00:19:35.458 "digest": "sha384", 00:19:35.458 "dhgroup": "null" 00:19:35.458 } 00:19:35.458 } 00:19:35.458 ]' 00:19:35.458 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.719 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.719 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.719 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:35.719 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.719 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.719 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.719 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.980 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:35.980 07:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:36.552 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.552 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.552 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.552 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.552 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.552 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.552 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:19:36.552 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.813 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.074 00:19:37.074 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.074 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.074 07:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.074 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.074 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.074 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.074 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.074 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.074 07:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.074 { 00:19:37.074 "cntlid": 53, 00:19:37.074 "qid": 0, 00:19:37.074 "state": "enabled", 00:19:37.074 "thread": "nvmf_tgt_poll_group_000", 00:19:37.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:37.074 "listen_address": { 00:19:37.074 "trtype": "TCP", 00:19:37.074 "adrfam": "IPv4", 00:19:37.074 "traddr": "10.0.0.2", 00:19:37.074 "trsvcid": "4420" 00:19:37.074 }, 00:19:37.074 "peer_address": { 00:19:37.074 "trtype": "TCP", 00:19:37.074 "adrfam": "IPv4", 00:19:37.074 "traddr": "10.0.0.1", 00:19:37.074 "trsvcid": "39310" 00:19:37.074 }, 00:19:37.074 "auth": { 00:19:37.074 "state": "completed", 00:19:37.074 "digest": "sha384", 00:19:37.074 "dhgroup": "null" 00:19:37.074 } 00:19:37.074 } 00:19:37.074 ]' 00:19:37.074 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.335 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.335 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.335 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:37.335 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.335 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.335 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.335 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.595 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:37.595 07:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:38.166 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.166 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.166 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.166 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.166 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.166 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:19:38.166 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:38.166 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.427 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.688 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.688 { 00:19:38.688 "cntlid": 55, 00:19:38.688 "qid": 0, 00:19:38.688 "state": "enabled", 00:19:38.688 "thread": "nvmf_tgt_poll_group_000", 00:19:38.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:38.688 "listen_address": { 00:19:38.688 "trtype": "TCP", 00:19:38.688 "adrfam": "IPv4", 00:19:38.688 "traddr": "10.0.0.2", 00:19:38.688 "trsvcid": "4420" 00:19:38.688 }, 00:19:38.688 "peer_address": { 00:19:38.688 "trtype": "TCP", 00:19:38.688 "adrfam": "IPv4", 00:19:38.688 "traddr": "10.0.0.1", 00:19:38.688 "trsvcid": "39356" 00:19:38.688 }, 00:19:38.688 "auth": { 00:19:38.688 "state": "completed", 00:19:38.688 "digest": "sha384", 00:19:38.688 "dhgroup": "null" 00:19:38.688 } 00:19:38.688 } 00:19:38.688 ]' 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.688 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.948 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.948 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:38.948 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.948 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.948 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.948 07:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.209 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:39.209 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:39.783 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.783 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.783 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.783 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.783 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.783 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.783 07:29:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.783 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:39.783 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.043 07:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.043 00:19:40.303 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.303 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.303 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.303 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.303 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.303 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:40.303 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.303 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.303 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.304 { 00:19:40.304 "cntlid": 57, 00:19:40.304 "qid": 0, 00:19:40.304 "state": "enabled", 00:19:40.304 "thread": "nvmf_tgt_poll_group_000", 00:19:40.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:40.304 "listen_address": { 00:19:40.304 "trtype": "TCP", 00:19:40.304 "adrfam": "IPv4", 00:19:40.304 "traddr": "10.0.0.2", 00:19:40.304 "trsvcid": "4420" 00:19:40.304 }, 00:19:40.304 "peer_address": { 00:19:40.304 "trtype": "TCP", 00:19:40.304 "adrfam": "IPv4", 00:19:40.304 "traddr": "10.0.0.1", 00:19:40.304 "trsvcid": "39390" 00:19:40.304 }, 00:19:40.304 "auth": { 00:19:40.304 "state": "completed", 00:19:40.304 "digest": "sha384", 00:19:40.304 "dhgroup": "ffdhe2048" 00:19:40.304 } 00:19:40.304 } 00:19:40.304 ]' 00:19:40.304 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.304 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.304 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.564 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.564 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.564 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.564 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.564 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.564 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:40.564 07:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.506 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.767 00:19:41.767 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.767 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.767 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.052 { 00:19:42.052 "cntlid": 59, 00:19:42.052 "qid": 0, 00:19:42.052 "state": "enabled", 00:19:42.052 "thread": "nvmf_tgt_poll_group_000", 00:19:42.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:42.052 "listen_address": { 00:19:42.052 "trtype": "TCP", 00:19:42.052 "adrfam": "IPv4", 00:19:42.052 "traddr": "10.0.0.2", 00:19:42.052 "trsvcid": "4420" 00:19:42.052 }, 00:19:42.052 "peer_address": { 00:19:42.052 "trtype": "TCP", 00:19:42.052 "adrfam": "IPv4", 00:19:42.052 "traddr": "10.0.0.1", 00:19:42.052 "trsvcid": "39398" 00:19:42.052 }, 00:19:42.052 "auth": { 00:19:42.052 "state": "completed", 00:19:42.052 "digest": "sha384", 00:19:42.052 "dhgroup": "ffdhe2048" 00:19:42.052 } 00:19:42.052 } 00:19:42.052 ]' 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.052 07:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.052 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.052 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.052 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.313 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:42.313 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:42.885 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.885 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.885 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.885 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.885 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.886 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.886 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.886 07:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.146 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.484 00:19:43.484 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.484 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.484 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.484 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.484 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.484 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.484 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.484 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.484 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.484 { 00:19:43.484 "cntlid": 61, 00:19:43.484 "qid": 0, 00:19:43.484 "state": "enabled", 00:19:43.484 "thread": "nvmf_tgt_poll_group_000", 00:19:43.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:43.484 "listen_address": { 00:19:43.484 "trtype": "TCP", 00:19:43.484 "adrfam": "IPv4", 00:19:43.484 "traddr": "10.0.0.2", 00:19:43.484 "trsvcid": "4420" 00:19:43.484 }, 00:19:43.484 "peer_address": { 00:19:43.484 "trtype": "TCP", 00:19:43.484 "adrfam": "IPv4", 00:19:43.484 "traddr": "10.0.0.1", 00:19:43.484 "trsvcid": "39420" 00:19:43.484 }, 00:19:43.484 "auth": { 00:19:43.484 "state": "completed", 00:19:43.484 "digest": "sha384", 00:19:43.484 "dhgroup": "ffdhe2048" 00:19:43.484 } 00:19:43.484 } 00:19:43.484 ]' 00:19:43.484 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.767 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.767 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.767 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:43.767 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.767 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.767 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.767 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.767 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:43.767 07:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.719 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.981 00:19:44.981 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.981 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.981 07:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.243 { 00:19:45.243 "cntlid": 63, 00:19:45.243 "qid": 0, 00:19:45.243 "state": "enabled", 00:19:45.243 "thread": "nvmf_tgt_poll_group_000", 00:19:45.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:45.243 "listen_address": { 00:19:45.243 "trtype": "TCP", 00:19:45.243 "adrfam": "IPv4", 00:19:45.243 "traddr": "10.0.0.2", 00:19:45.243 "trsvcid": "4420" 00:19:45.243 }, 00:19:45.243 "peer_address": { 00:19:45.243 "trtype": "TCP", 00:19:45.243 "adrfam": "IPv4", 00:19:45.243 "traddr": "10.0.0.1", 00:19:45.243 "trsvcid": "39450" 00:19:45.243 }, 00:19:45.243 "auth": { 00:19:45.243 "state": "completed", 00:19:45.243 "digest": "sha384", 00:19:45.243 "dhgroup": "ffdhe2048" 00:19:45.243 } 00:19:45.243 } 00:19:45.243 ]' 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.243 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.505 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:45.505 07:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:46.077 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:46.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.077 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:46.077 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.077 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.077 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.077 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.077 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.077 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.077 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.338 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.599 
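Each connect_authenticate pass in this sha384 sweep reduces to the same three RPCs; a condensed sketch follows (rpc.py abbreviates the full scripts/rpc.py path from the trace, <hostnqn> stands for this run's nqn.2014-08.org.nvmexpress:uuid:... host NQN, and key0/ckey0 are the key names pre-loaded by the test):

    # host side: pin the initiator to a single digest/dhgroup so the negotiated result is deterministic
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # target side: register the host NQN with its DH-HMAC-CHAP key (ckey0 makes the authentication bidirectional)
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller, which triggers the AUTH handshake on the new admin qpair
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0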
00:19:46.599 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.599 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.599 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.860 { 00:19:46.860 "cntlid": 65, 00:19:46.860 "qid": 0, 00:19:46.860 "state": "enabled", 00:19:46.860 "thread": "nvmf_tgt_poll_group_000", 00:19:46.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:46.860 "listen_address": { 00:19:46.860 "trtype": "TCP", 00:19:46.860 "adrfam": "IPv4", 00:19:46.860 "traddr": "10.0.0.2", 00:19:46.860 "trsvcid": "4420" 00:19:46.860 }, 00:19:46.860 "peer_address": { 00:19:46.860 "trtype": "TCP", 00:19:46.860 "adrfam": "IPv4", 00:19:46.860 "traddr": "10.0.0.1", 00:19:46.860 "trsvcid": "39464" 00:19:46.860 }, 00:19:46.860 "auth": { 00:19:46.860 "state": "completed", 00:19:46.860 "digest": "sha384", 00:19:46.860 "dhgroup": "ffdhe3072" 00:19:46.860 } 00:19:46.860 } 00:19:46.860 ]' 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.860 07:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.121 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:47.121 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:47.692 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.692 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.692 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.692 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.692 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.692 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.692 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:47.692 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:47.954 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:47.954 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.955 07:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.216 00:19:48.216 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.216 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.216 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.477 { 00:19:48.477 "cntlid": 67, 00:19:48.477 "qid": 0, 00:19:48.477 "state": "enabled", 00:19:48.477 "thread": "nvmf_tgt_poll_group_000", 00:19:48.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:48.477 "listen_address": { 00:19:48.477 "trtype": "TCP", 00:19:48.477 "adrfam": "IPv4", 00:19:48.477 "traddr": "10.0.0.2", 00:19:48.477 "trsvcid": "4420" 00:19:48.477 }, 00:19:48.477 "peer_address": { 00:19:48.477 "trtype": "TCP", 00:19:48.477 "adrfam": "IPv4", 00:19:48.477 "traddr": "10.0.0.1", 00:19:48.477 "trsvcid": "32930" 00:19:48.477 }, 00:19:48.477 "auth": { 00:19:48.477 "state": "completed", 00:19:48.477 "digest": "sha384", 00:19:48.477 "dhgroup": "ffdhe3072" 00:19:48.477 } 00:19:48.477 } 00:19:48.477 ]' 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.477 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.738 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret 
DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:48.738 07:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:49.311 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.572 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.833 00:19:49.833 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.833 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.833 07:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.094 { 00:19:50.094 "cntlid": 69, 00:19:50.094 "qid": 0, 00:19:50.094 "state": "enabled", 00:19:50.094 "thread": "nvmf_tgt_poll_group_000", 00:19:50.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:50.094 "listen_address": { 00:19:50.094 "trtype": "TCP", 00:19:50.094 "adrfam": "IPv4", 00:19:50.094 "traddr": "10.0.0.2", 00:19:50.094 "trsvcid": "4420" 00:19:50.094 }, 00:19:50.094 "peer_address": { 00:19:50.094 "trtype": "TCP", 00:19:50.094 "adrfam": "IPv4", 00:19:50.094 "traddr": "10.0.0.1", 00:19:50.094 "trsvcid": "32952" 00:19:50.094 }, 00:19:50.094 "auth": { 00:19:50.094 "state": "completed", 00:19:50.094 "digest": "sha384", 00:19:50.094 "dhgroup": "ffdhe3072" 00:19:50.094 } 00:19:50.094 } 00:19:50.094 ]' 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.094 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.354 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.354 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.354 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:50.354 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:50.354 07:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.300 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:51.301 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.301 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.301 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.301 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
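One detail worth noting in the key3 pass underway here: nvmf_subsystem_add_host is invoked with --dhchap-key key3 only, and the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion drops the controller key entirely when no ckey is configured. That makes key3 the unidirectional case (the host authenticates to the controller but does not challenge it back), in contrast to the key0-key2 passes. Sketched with the same RPC, <subnqn>/<hostnqn> as placeholders:

    # bidirectional: host and controller authenticate each other
    rpc.py nvmf_subsystem_add_host <subnqn> <hostnqn> --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # unidirectional: controller key omitted, host-side authentication only
    rpc.py nvmf_subsystem_add_host <subnqn> <hostnqn> --dhchap-key key3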
00:19:51.301 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.301 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.562 00:19:51.562 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.562 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.562 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.823 { 00:19:51.823 "cntlid": 71, 00:19:51.823 "qid": 0, 00:19:51.823 "state": "enabled", 00:19:51.823 "thread": "nvmf_tgt_poll_group_000", 00:19:51.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:51.823 "listen_address": { 00:19:51.823 "trtype": "TCP", 00:19:51.823 "adrfam": "IPv4", 00:19:51.823 "traddr": "10.0.0.2", 00:19:51.823 "trsvcid": "4420" 00:19:51.823 }, 00:19:51.823 "peer_address": { 00:19:51.823 "trtype": "TCP", 00:19:51.823 "adrfam": "IPv4", 00:19:51.823 "traddr": "10.0.0.1", 00:19:51.823 "trsvcid": "32980" 00:19:51.823 }, 00:19:51.823 "auth": { 00:19:51.823 "state": "completed", 00:19:51.823 "digest": "sha384", 00:19:51.823 "dhgroup": "ffdhe3072" 00:19:51.823 } 00:19:51.823 } 00:19:51.823 ]' 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.823 07:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.085 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:52.085 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:52.657 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.657 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.657 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.657 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.657 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.657 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.657 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.657 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.657 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
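The ffdhe4096 pass starting here repeats the same per-key cycle traced above. Condensed, one iteration amounts to the following sketch, under the assumption that $RPC_TGT and $RPC_HOST are shorthand for the two rpc.py invocations seen in the trace (target default socket and /var/tmp/host.sock) and that key0/ckey0 are key names already loaded in the keyring:

    RPC_HOST='scripts/rpc.py -s /var/tmp/host.sock'   # host-side bdev_nvme app
    RPC_TGT='scripts/rpc.py'                          # nvmf target, default socket
    # HOSTNQN as defined in the sketch further up.
    # 1. Pin the host initiator to one digest / DH group combination.
    $RPC_HOST bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    # 2. Authorize the host NQN on the subsystem with a key pair
    #    (the ctrlr key makes the authentication bidirectional).
    $RPC_TGT nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 3. Attach from the host side; authentication happens during controller setup.
    $RPC_HOST bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 4. Confirm the qpair negotiated what was configured
    #    (expected: completed / sha384 / ffdhe4096).
    $RPC_TGT nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'
    # 5. Tear down before the next key/group combination.
    $RPC_HOST bdev_nvme_detach_controller nvme0
    $RPC_TGT nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"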
00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.918 07:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.178 00:19:53.178 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.178 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.178 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.438 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.438 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.439 { 00:19:53.439 "cntlid": 73, 00:19:53.439 "qid": 0, 00:19:53.439 "state": "enabled", 00:19:53.439 "thread": "nvmf_tgt_poll_group_000", 00:19:53.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:53.439 "listen_address": { 00:19:53.439 "trtype": "TCP", 00:19:53.439 "adrfam": "IPv4", 00:19:53.439 "traddr": "10.0.0.2", 00:19:53.439 "trsvcid": "4420" 00:19:53.439 }, 00:19:53.439 "peer_address": { 00:19:53.439 "trtype": "TCP", 00:19:53.439 "adrfam": "IPv4", 00:19:53.439 "traddr": "10.0.0.1", 00:19:53.439 "trsvcid": "33012" 00:19:53.439 }, 00:19:53.439 "auth": { 00:19:53.439 "state": "completed", 00:19:53.439 "digest": "sha384", 00:19:53.439 "dhgroup": "ffdhe4096" 00:19:53.439 } 00:19:53.439 } 00:19:53.439 ]' 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.439 
07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.439 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.699 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:53.699 07:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:19:54.269 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.269 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.269 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.269 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.269 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.269 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.270 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.270 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.530 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:54.530 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.530 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:54.530 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:54.530 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:54.530 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.531 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.531 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.531 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.531 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.531 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.531 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.531 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.791 00:19:54.791 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.791 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.791 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.052 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.052 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.052 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.052 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.052 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.052 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.052 { 00:19:55.052 "cntlid": 75, 00:19:55.052 "qid": 0, 00:19:55.052 "state": "enabled", 00:19:55.052 "thread": "nvmf_tgt_poll_group_000", 00:19:55.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:55.052 "listen_address": { 00:19:55.052 "trtype": "TCP", 00:19:55.052 "adrfam": "IPv4", 00:19:55.052 "traddr": "10.0.0.2", 00:19:55.052 "trsvcid": "4420" 00:19:55.052 }, 00:19:55.052 "peer_address": { 00:19:55.052 "trtype": "TCP", 00:19:55.052 "adrfam": "IPv4", 00:19:55.052 "traddr": "10.0.0.1", 00:19:55.052 "trsvcid": "33028" 00:19:55.052 }, 00:19:55.052 "auth": { 00:19:55.052 "state": "completed", 00:19:55.052 "digest": "sha384", 00:19:55.052 "dhgroup": "ffdhe4096" 00:19:55.052 } 00:19:55.052 } 00:19:55.052 ]' 00:19:55.052 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.052 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.052 07:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.052 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:19:55.052 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.052 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.052 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.052 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.313 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:55.313 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:19:55.884 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.884 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:55.884 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.884 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.884 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.884 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.884 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:55.884 07:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.145 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.406 00:19:56.406 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.406 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.406 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.666 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.666 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.666 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.666 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.666 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.666 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.666 { 00:19:56.666 "cntlid": 77, 00:19:56.666 "qid": 0, 00:19:56.666 "state": "enabled", 00:19:56.666 "thread": "nvmf_tgt_poll_group_000", 00:19:56.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:56.666 "listen_address": { 00:19:56.666 "trtype": "TCP", 00:19:56.666 "adrfam": "IPv4", 00:19:56.666 "traddr": "10.0.0.2", 00:19:56.666 "trsvcid": "4420" 00:19:56.666 }, 00:19:56.666 "peer_address": { 00:19:56.666 "trtype": "TCP", 00:19:56.666 "adrfam": "IPv4", 00:19:56.667 "traddr": "10.0.0.1", 00:19:56.667 "trsvcid": "33048" 00:19:56.667 }, 00:19:56.667 "auth": { 00:19:56.667 "state": "completed", 00:19:56.667 "digest": "sha384", 00:19:56.667 "dhgroup": "ffdhe4096" 00:19:56.667 } 00:19:56.667 } 00:19:56.667 ]' 00:19:56.667 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.667 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.667 07:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.667 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.667 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.667 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.667 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.667 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.927 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:56.927 07:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:19:57.498 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.498 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.498 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.498 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.498 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.498 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.498 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:57.498 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.759 07:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.020 00:19:58.020 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.020 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.020 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.281 { 00:19:58.281 "cntlid": 79, 00:19:58.281 "qid": 0, 00:19:58.281 "state": "enabled", 00:19:58.281 "thread": "nvmf_tgt_poll_group_000", 00:19:58.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:58.281 "listen_address": { 00:19:58.281 "trtype": "TCP", 00:19:58.281 "adrfam": "IPv4", 00:19:58.281 "traddr": "10.0.0.2", 00:19:58.281 "trsvcid": "4420" 00:19:58.281 }, 00:19:58.281 "peer_address": { 00:19:58.281 "trtype": "TCP", 00:19:58.281 "adrfam": "IPv4", 00:19:58.281 "traddr": "10.0.0.1", 00:19:58.281 "trsvcid": "45620" 00:19:58.281 }, 00:19:58.281 "auth": { 00:19:58.281 "state": "completed", 00:19:58.281 "digest": "sha384", 00:19:58.281 "dhgroup": "ffdhe4096" 00:19:58.281 } 00:19:58.281 } 00:19:58.281 ]' 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.281 07:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.281 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.542 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:58.542 07:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:19:59.113 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:59.375 07:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.375 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.946 00:19:59.946 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.946 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.946 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.946 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.946 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.946 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.946 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.946 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.946 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.946 { 00:19:59.946 "cntlid": 81, 00:19:59.946 "qid": 0, 00:19:59.946 "state": "enabled", 00:19:59.946 "thread": "nvmf_tgt_poll_group_000", 00:19:59.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:59.946 "listen_address": { 00:19:59.946 "trtype": "TCP", 00:19:59.946 "adrfam": "IPv4", 00:19:59.946 "traddr": "10.0.0.2", 00:19:59.946 "trsvcid": "4420" 00:19:59.946 }, 00:19:59.946 "peer_address": { 00:19:59.946 "trtype": "TCP", 00:19:59.946 "adrfam": "IPv4", 00:19:59.946 "traddr": "10.0.0.1", 00:19:59.946 "trsvcid": "45642" 00:19:59.946 }, 00:19:59.946 "auth": { 00:19:59.946 "state": "completed", 00:19:59.946 "digest": 
"sha384", 00:19:59.946 "dhgroup": "ffdhe6144" 00:19:59.946 } 00:19:59.946 } 00:19:59.946 ]' 00:19:59.946 07:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.946 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:59.946 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.207 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.207 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.208 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.208 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.208 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.208 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:00.208 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:01.151 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.151 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.151 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.151 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.151 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.151 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.151 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.151 07:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.151 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.412 00:20:01.412 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.412 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.412 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.674 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.674 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.674 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.674 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.674 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.674 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.674 { 00:20:01.674 "cntlid": 83, 00:20:01.674 "qid": 0, 00:20:01.674 "state": "enabled", 00:20:01.674 "thread": "nvmf_tgt_poll_group_000", 00:20:01.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:01.674 "listen_address": { 00:20:01.674 "trtype": "TCP", 00:20:01.674 "adrfam": "IPv4", 00:20:01.674 "traddr": "10.0.0.2", 00:20:01.674 
"trsvcid": "4420" 00:20:01.674 }, 00:20:01.674 "peer_address": { 00:20:01.674 "trtype": "TCP", 00:20:01.674 "adrfam": "IPv4", 00:20:01.674 "traddr": "10.0.0.1", 00:20:01.674 "trsvcid": "45658" 00:20:01.674 }, 00:20:01.674 "auth": { 00:20:01.674 "state": "completed", 00:20:01.674 "digest": "sha384", 00:20:01.674 "dhgroup": "ffdhe6144" 00:20:01.674 } 00:20:01.674 } 00:20:01.674 ]' 00:20:01.674 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.674 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.674 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.935 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:01.935 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.935 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.935 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.935 07:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.935 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:01.935 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:02.878 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.878 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.878 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.878 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:02.879 
07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.879 07:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.140 00:20:03.140 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.140 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.140 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.400 { 00:20:03.400 "cntlid": 85, 00:20:03.400 "qid": 0, 00:20:03.400 "state": "enabled", 00:20:03.400 "thread": "nvmf_tgt_poll_group_000", 00:20:03.400 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:03.400 "listen_address": { 00:20:03.400 "trtype": "TCP", 00:20:03.400 "adrfam": "IPv4", 00:20:03.400 "traddr": "10.0.0.2", 00:20:03.400 "trsvcid": "4420" 00:20:03.400 }, 00:20:03.400 "peer_address": { 00:20:03.400 "trtype": "TCP", 00:20:03.400 "adrfam": "IPv4", 00:20:03.400 "traddr": "10.0.0.1", 00:20:03.400 "trsvcid": "45676" 00:20:03.400 }, 00:20:03.400 "auth": { 00:20:03.400 "state": "completed", 00:20:03.400 "digest": "sha384", 00:20:03.400 "dhgroup": "ffdhe6144" 00:20:03.400 } 00:20:03.400 } 00:20:03.400 ]' 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:03.400 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.661 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.661 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.661 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.661 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:03.661 07:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:04.233 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.494 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.494 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.494 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.494 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.494 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.494 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:04.494 07:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:20:04.494 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:04.495 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:05.068
00:20:05.068 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:05.068 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:05.068 07:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:05.068 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:05.068 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:05.068 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.068 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.068 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.068 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:05.068 {
00:20:05.068 "cntlid": 87,
00:20:05.068 "qid": 0,
00:20:05.068 "state": "enabled",
00:20:05.068 "thread": "nvmf_tgt_poll_group_000",
00:20:05.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:05.068 "listen_address": {
00:20:05.068 "trtype": "TCP",
00:20:05.068 "adrfam": "IPv4",
00:20:05.068 "traddr": "10.0.0.2",
00:20:05.068 "trsvcid": "4420"
00:20:05.068 },
00:20:05.068 "peer_address": {
00:20:05.068 "trtype": "TCP",
00:20:05.068 "adrfam": "IPv4",
00:20:05.068 "traddr": "10.0.0.1",
00:20:05.068 "trsvcid": "45704"
00:20:05.068 },
00:20:05.068 "auth": {
00:20:05.068 "state": "completed",
00:20:05.068 "digest": "sha384",
00:20:05.068 "dhgroup": "ffdhe6144"
00:20:05.068 }
00:20:05.068 }
00:20:05.068 ]'
00:20:05.068 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:05.068 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:05.068 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:05.068 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:05.331 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:05.331 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:05.331 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:05.331 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:05.331 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=:
00:20:05.331 07:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=:
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:06.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:06.273 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:06.845
00:20:06.845 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:06.845 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:06.845 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:06.845 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:06.845 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:06.845 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:06.845 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:06.845 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:06.845 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:06.845 {
00:20:06.845 "cntlid": 89,
00:20:06.845 "qid": 0,
00:20:06.845 "state": "enabled",
00:20:06.845 "thread": "nvmf_tgt_poll_group_000",
00:20:06.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:06.845 "listen_address": {
00:20:06.845 "trtype": "TCP",
00:20:06.845 "adrfam": "IPv4",
00:20:06.845 "traddr": "10.0.0.2",
00:20:06.845 "trsvcid": "4420"
00:20:06.845 },
00:20:06.845 "peer_address": {
00:20:06.845 "trtype": "TCP",
00:20:06.845 "adrfam": "IPv4",
00:20:06.845 "traddr": "10.0.0.1",
00:20:06.845 "trsvcid": "45726"
00:20:06.845 },
00:20:06.845 "auth": {
00:20:06.845 "state": "completed",
00:20:06.845 "digest": "sha384",
00:20:06.845 "dhgroup": "ffdhe8192"
00:20:06.845 }
00:20:06.845 }
00:20:06.845 ]'
00:20:07.106 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:07.106 07:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:07.106 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:07.106 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:07.106 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:07.106 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:07.106 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:07.106 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:07.367 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=:
00:20:07.367 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=:
00:20:07.939 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:07.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:07.939 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:07.939 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:07.939 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.939 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:07.939 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:07.939 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:07.939 07:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:08.199 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:08.461
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:08.722 {
00:20:08.722 "cntlid": 91,
00:20:08.722 "qid": 0,
00:20:08.722 "state": "enabled",
00:20:08.722 "thread": "nvmf_tgt_poll_group_000",
00:20:08.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:08.722 "listen_address": {
00:20:08.722 "trtype": "TCP",
00:20:08.722 "adrfam": "IPv4",
00:20:08.722 "traddr": "10.0.0.2",
00:20:08.722 "trsvcid": "4420"
00:20:08.722 },
00:20:08.722 "peer_address": {
00:20:08.722 "trtype": "TCP",
00:20:08.722 "adrfam": "IPv4",
00:20:08.722 "traddr": "10.0.0.1",
00:20:08.722 "trsvcid": "33126"
00:20:08.722 },
00:20:08.722 "auth": {
00:20:08.722 "state": "completed",
00:20:08.722 "digest": "sha384",
00:20:08.722 "dhgroup": "ffdhe8192"
00:20:08.722 }
00:20:08.722 }
00:20:08.722 ]'
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:08.722 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:08.983 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:08.983 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:08.983 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:08.983 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:08.983 07:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:09.243 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==:
00:20:09.243 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==:
00:20:09.815 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:09.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:09.815 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:09.815 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.815 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.815 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.815 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:09.815 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:09.815 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:10.076 07:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:10.335
00:20:10.596 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:10.596 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:10.596 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:10.596 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:10.596 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:10.596 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.596 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.596 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.596 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:10.596 {
00:20:10.596 "cntlid": 93,
00:20:10.596 "qid": 0,
00:20:10.596 "state": "enabled",
00:20:10.596 "thread": "nvmf_tgt_poll_group_000",
00:20:10.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:10.596 "listen_address": {
00:20:10.596 "trtype": "TCP",
00:20:10.596 "adrfam": "IPv4",
00:20:10.596 "traddr": "10.0.0.2",
00:20:10.596 "trsvcid": "4420"
00:20:10.596 },
00:20:10.596 "peer_address": {
00:20:10.596 "trtype": "TCP",
00:20:10.596 "adrfam": "IPv4",
00:20:10.596 "traddr": "10.0.0.1",
00:20:10.596 "trsvcid": "33154"
00:20:10.596 },
00:20:10.596 "auth": {
00:20:10.596 "state": "completed",
00:20:10.596 "digest": "sha384",
00:20:10.596 "dhgroup": "ffdhe8192"
00:20:10.596 }
00:20:10.596 }
00:20:10.596 ]'
00:20:10.857 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:10.857 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:10.857 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:10.857 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:10.857 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:10.857 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:10.857 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:10.857 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:11.116 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ:
00:20:11.116 07:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ:
00:20:11.712 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:11.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:11.712 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:11.712 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.712 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.712 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.712 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:11.712 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:11.712 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:11.972 07:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:12.232
00:20:12.232 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:12.232 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:12.232 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:12.492 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:12.492 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:12.492 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.492 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.492 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.492 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:12.492 {
00:20:12.492 "cntlid": 95,
00:20:12.492 "qid": 0,
00:20:12.492 "state": "enabled",
00:20:12.492 "thread": "nvmf_tgt_poll_group_000",
00:20:12.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:12.492 "listen_address": {
00:20:12.492 "trtype": "TCP",
00:20:12.492 "adrfam": "IPv4",
00:20:12.492 "traddr": "10.0.0.2",
00:20:12.492 "trsvcid": "4420"
00:20:12.492 },
00:20:12.492 "peer_address": {
00:20:12.492 "trtype": "TCP",
00:20:12.492 "adrfam": "IPv4",
00:20:12.492 "traddr": "10.0.0.1",
00:20:12.492 "trsvcid": "33178"
00:20:12.492 },
00:20:12.492 "auth": {
00:20:12.492 "state": "completed",
00:20:12.492 "digest": "sha384",
00:20:12.493 "dhgroup": "ffdhe8192"
00:20:12.493 }
00:20:12.493 }
00:20:12.493 ]'
00:20:12.493 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:12.493 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:12.493 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:12.753 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:12.753 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:12.753 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:12.753 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:12.753 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:12.753 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=:
00:20:12.753 07:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=:
00:20:13.694 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:13.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:13.694 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:13.694 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.694 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.694 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:13.694 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:20:13.694 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:13.694 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.695 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.956
00:20:13.956 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:13.956 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:13.956 07:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:14.218 {
00:20:14.218 "cntlid": 97,
00:20:14.218 "qid": 0,
00:20:14.218 "state": "enabled",
00:20:14.218 "thread": "nvmf_tgt_poll_group_000",
00:20:14.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:14.218 "listen_address": {
00:20:14.218 "trtype": "TCP",
00:20:14.218 "adrfam": "IPv4",
00:20:14.218 "traddr": "10.0.0.2",
00:20:14.218 "trsvcid": "4420"
00:20:14.218 },
00:20:14.218 "peer_address": {
00:20:14.218 "trtype": "TCP",
00:20:14.218 "adrfam": "IPv4",
00:20:14.218 "traddr": "10.0.0.1",
00:20:14.218 "trsvcid": "33214"
00:20:14.218 },
00:20:14.218 "auth": {
00:20:14.218 "state": "completed",
00:20:14.218 "digest": "sha512",
00:20:14.218 "dhgroup": "null"
00:20:14.218 }
00:20:14.218 }
00:20:14.218 ]'
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:14.218 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:14.478 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=:
00:20:14.478 07:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=:
00:20:15.051 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:15.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:15.051 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:15.051 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:15.051 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.051 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:15.051 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:15.051 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:15.051 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.312 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:15.574
00:20:15.574 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:15.574 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:15.574 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:15.835 {
00:20:15.835 "cntlid": 99,
00:20:15.835 "qid": 0,
00:20:15.835 "state": "enabled",
00:20:15.835 "thread": "nvmf_tgt_poll_group_000",
00:20:15.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:15.835 "listen_address": {
00:20:15.835 "trtype": "TCP",
00:20:15.835 "adrfam": "IPv4",
00:20:15.835 "traddr": "10.0.0.2",
00:20:15.835 "trsvcid": "4420"
00:20:15.835 },
00:20:15.835 "peer_address": {
00:20:15.835 "trtype": "TCP",
00:20:15.835 "adrfam": "IPv4",
00:20:15.835 "traddr": "10.0.0.1",
00:20:15.835 "trsvcid": "33248"
00:20:15.835 },
00:20:15.835 "auth": {
00:20:15.835 "state": "completed",
00:20:15.835 "digest": "sha512",
00:20:15.835 "dhgroup": "null"
00:20:15.835 }
00:20:15.835 }
00:20:15.835 ]'
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:15.835 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:16.096 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==:
00:20:16.096 07:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==:
00:20:16.666 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:16.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:16.666 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:16.666 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.666 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.666 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.666 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:16.666 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:16.666 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:16.926 07:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:17.186
00:20:17.186 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:17.186 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:17.186 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:17.446 {
00:20:17.446 "cntlid": 101,
00:20:17.446 "qid": 0,
00:20:17.446 "state": "enabled",
00:20:17.446 "thread": "nvmf_tgt_poll_group_000",
00:20:17.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:17.446 "listen_address": {
00:20:17.446 "trtype": "TCP",
00:20:17.446 "adrfam": "IPv4",
00:20:17.446 "traddr": "10.0.0.2",
00:20:17.446 "trsvcid": "4420"
00:20:17.446 },
00:20:17.446 "peer_address": {
00:20:17.446 "trtype": "TCP",
00:20:17.446 "adrfam": "IPv4",
00:20:17.446 "traddr": "10.0.0.1",
00:20:17.446 "trsvcid": "35462"
00:20:17.446 },
00:20:17.446 "auth": {
00:20:17.446 "state": "completed",
00:20:17.446 "digest": "sha512",
00:20:17.446 "dhgroup": "null"
00:20:17.446 }
00:20:17.446 }
00:20:17.446 ]'
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:17.446 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:17.706 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ:
00:20:17.706 07:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ:
00:20:18.277 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:18.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:18.277 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:18.277 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.277 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.277 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:18.277 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:18.277 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:18.277 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:18.538 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:18.798
00:20:18.798 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:18.798 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:18.798 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:18.798 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:18.798 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:18.798 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.798 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.798 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:18.798 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:18.798 {
00:20:18.798 "cntlid": 103,
00:20:18.798 "qid": 0,
00:20:18.798 "state": "enabled",
00:20:18.798 "thread": "nvmf_tgt_poll_group_000",
00:20:18.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:18.798 "listen_address": {
00:20:18.798 "trtype": "TCP",
00:20:18.798 "adrfam": "IPv4",
00:20:18.798 "traddr": "10.0.0.2",
00:20:18.798 "trsvcid": "4420"
00:20:18.798 },
00:20:18.798 "peer_address": {
00:20:18.798 "trtype": "TCP",
00:20:18.798 "adrfam": "IPv4",
00:20:18.798 "traddr": "10.0.0.1",
00:20:18.798 "trsvcid": "35484"
00:20:18.798 },
00:20:18.798 "auth": {
00:20:18.798 "state": "completed",
00:20:18.799 "digest": "sha512",
00:20:18.799 "dhgroup": "null"
00:20:18.799 }
00:20:18.799 }
00:20:18.799 ]'
00:20:19.059 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:19.059 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:19.059 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:19.060 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:19.060 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:19.060 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:19.060 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:19.060 07:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:19.321 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=:
00:20:19.321 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=:
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:19.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:19.894 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:20.155 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:20.155 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:20.155 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:20.155 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:20.155 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.155 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.155 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.155 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:20.155 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:20.155 07:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:20.155
00:20:20.155 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:20.155 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:20.155 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:20.417 {
00:20:20.417 "cntlid": 105,
00:20:20.417 "qid": 0,
00:20:20.417 "state": "enabled",
00:20:20.417 "thread": "nvmf_tgt_poll_group_000",
00:20:20.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:20:20.417 "listen_address": {
00:20:20.417 "trtype": "TCP",
00:20:20.417 "adrfam": "IPv4",
00:20:20.417 "traddr": "10.0.0.2",
00:20:20.417 "trsvcid": "4420"
00:20:20.417 },
00:20:20.417 "peer_address": {
00:20:20.417 "trtype": "TCP",
00:20:20.417 "adrfam": "IPv4",
00:20:20.417 "traddr": "10.0.0.1",
00:20:20.417 "trsvcid": "35518"
00:20:20.417 },
00:20:20.417 "auth": {
00:20:20.417 "state": "completed",
00:20:20.417 "digest": "sha512",
00:20:20.417 "dhgroup": "ffdhe2048"
00:20:20.417 }
00:20:20.417 }
00:20:20.417 ]'
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:20.417 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:20.679 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:20.679 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:20.679 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:20.679 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=:
00:20:20.679 07:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=:
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:21.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:21.621 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:21.622 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.622 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@10 -- # set +x 00:20:21.622 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.622 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.622 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.622 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.964 00:20:21.964 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.964 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.964 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.964 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.964 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.965 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.965 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.965 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.965 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.965 { 00:20:21.965 "cntlid": 107, 00:20:21.965 "qid": 0, 00:20:21.965 "state": "enabled", 00:20:21.965 "thread": "nvmf_tgt_poll_group_000", 00:20:21.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:21.965 "listen_address": { 00:20:21.965 "trtype": "TCP", 00:20:21.965 "adrfam": "IPv4", 00:20:21.965 "traddr": "10.0.0.2", 00:20:21.965 "trsvcid": "4420" 00:20:21.965 }, 00:20:21.965 "peer_address": { 00:20:21.965 "trtype": "TCP", 00:20:21.965 "adrfam": "IPv4", 00:20:21.965 "traddr": "10.0.0.1", 00:20:21.965 "trsvcid": "35546" 00:20:21.965 }, 00:20:21.965 "auth": { 00:20:21.965 "state": "completed", 00:20:21.965 "digest": "sha512", 00:20:21.965 "dhgroup": "ffdhe2048" 00:20:21.965 } 00:20:21.965 } 00:20:21.965 ]' 00:20:21.965 07:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.264 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.264 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.264 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.264 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:20:22.264 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.264 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.264 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.264 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:22.264 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:23.217 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.217 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:23.217 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.217 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.217 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.217 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.217 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:23.217 07:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
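[Editor's note] Each attach is verified the same way before teardown: the test reads back the controller name, then inspects the qpair's auth descriptor from the target. A sketch of those checks, condensed from the @73-@78 lines repeated throughout this trace (qpair JSON shape as dumped above):

  name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                                  # controller came up
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  hostrpc bdev_nvme_detach_controller nvme0             # tear down before next leg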
00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.217 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.478 00:20:23.478 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.478 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.478 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.739 { 00:20:23.739 "cntlid": 109, 00:20:23.739 "qid": 0, 00:20:23.739 "state": "enabled", 00:20:23.739 "thread": "nvmf_tgt_poll_group_000", 00:20:23.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:23.739 "listen_address": { 00:20:23.739 "trtype": "TCP", 00:20:23.739 "adrfam": "IPv4", 00:20:23.739 "traddr": "10.0.0.2", 00:20:23.739 "trsvcid": "4420" 00:20:23.739 }, 00:20:23.739 "peer_address": { 00:20:23.739 "trtype": "TCP", 00:20:23.739 "adrfam": "IPv4", 00:20:23.739 "traddr": "10.0.0.1", 00:20:23.739 "trsvcid": "35570" 00:20:23.739 }, 00:20:23.739 "auth": { 00:20:23.739 "state": "completed", 00:20:23.739 "digest": "sha512", 00:20:23.739 "dhgroup": "ffdhe2048" 00:20:23.739 } 00:20:23.739 } 00:20:23.739 ]' 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.739 07:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.739 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.002 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:24.002 07:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:24.574 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.574 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:24.574 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.574 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.574 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.574 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.574 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:24.574 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.835 07:29:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:24.835 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.836 07:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.096 00:20:25.096 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.096 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.096 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.357 { 00:20:25.357 "cntlid": 111, 00:20:25.357 "qid": 0, 00:20:25.357 "state": "enabled", 00:20:25.357 "thread": "nvmf_tgt_poll_group_000", 00:20:25.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:25.357 "listen_address": { 00:20:25.357 "trtype": "TCP", 00:20:25.357 "adrfam": "IPv4", 00:20:25.357 "traddr": "10.0.0.2", 00:20:25.357 "trsvcid": "4420" 00:20:25.357 }, 00:20:25.357 "peer_address": { 00:20:25.357 "trtype": "TCP", 00:20:25.357 "adrfam": "IPv4", 00:20:25.357 "traddr": "10.0.0.1", 00:20:25.357 "trsvcid": "35594" 00:20:25.357 }, 00:20:25.357 "auth": { 00:20:25.357 "state": "completed", 00:20:25.357 "digest": "sha512", 00:20:25.357 "dhgroup": "ffdhe2048" 00:20:25.357 } 00:20:25.357 } 00:20:25.357 ]' 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.357 
07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.357 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.619 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:25.619 07:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:26.189 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.189 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:26.189 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.189 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.189 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.189 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.189 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.189 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:26.189 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.451 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.712 00:20:26.712 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.712 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.712 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.974 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.974 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.974 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.974 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.974 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.974 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.974 { 00:20:26.974 "cntlid": 113, 00:20:26.974 "qid": 0, 00:20:26.974 "state": "enabled", 00:20:26.974 "thread": "nvmf_tgt_poll_group_000", 00:20:26.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:26.974 "listen_address": { 00:20:26.974 "trtype": "TCP", 00:20:26.974 "adrfam": "IPv4", 00:20:26.974 "traddr": "10.0.0.2", 00:20:26.974 "trsvcid": "4420" 00:20:26.974 }, 00:20:26.974 "peer_address": { 00:20:26.974 "trtype": "TCP", 00:20:26.974 "adrfam": "IPv4", 00:20:26.974 "traddr": "10.0.0.1", 00:20:26.974 "trsvcid": "35622" 00:20:26.974 }, 00:20:26.974 "auth": { 00:20:26.974 "state": "completed", 00:20:26.974 "digest": "sha512", 00:20:26.974 "dhgroup": "ffdhe3072" 00:20:26.974 } 00:20:26.974 } 00:20:26.974 ]' 00:20:26.974 07:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.974 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.975 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.975 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.975 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.975 07:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.975 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.975 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.236 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:27.236 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:27.806 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.806 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.806 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.806 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.806 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.806 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.806 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:27.807 07:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.067 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.327 00:20:28.327 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.327 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.327 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.587 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.588 { 00:20:28.588 "cntlid": 115, 00:20:28.588 "qid": 0, 00:20:28.588 "state": "enabled", 00:20:28.588 "thread": "nvmf_tgt_poll_group_000", 00:20:28.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:28.588 "listen_address": { 00:20:28.588 "trtype": "TCP", 00:20:28.588 "adrfam": "IPv4", 00:20:28.588 "traddr": "10.0.0.2", 00:20:28.588 "trsvcid": "4420" 00:20:28.588 }, 00:20:28.588 "peer_address": { 00:20:28.588 "trtype": "TCP", 00:20:28.588 "adrfam": "IPv4", 
00:20:28.588 "traddr": "10.0.0.1", 00:20:28.588 "trsvcid": "37250" 00:20:28.588 }, 00:20:28.588 "auth": { 00:20:28.588 "state": "completed", 00:20:28.588 "digest": "sha512", 00:20:28.588 "dhgroup": "ffdhe3072" 00:20:28.588 } 00:20:28.588 } 00:20:28.588 ]' 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.588 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.848 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:28.848 07:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:29.420 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.420 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:29.420 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.420 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.420 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.420 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.420 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:29.420 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.681 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.942 00:20:29.942 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.942 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.942 07:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.203 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.203 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.203 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.203 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.203 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.203 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.203 { 00:20:30.203 "cntlid": 117, 00:20:30.203 "qid": 0, 00:20:30.203 "state": "enabled", 00:20:30.203 "thread": "nvmf_tgt_poll_group_000", 00:20:30.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:30.203 "listen_address": { 00:20:30.203 "trtype": "TCP", 
00:20:30.203 "adrfam": "IPv4", 00:20:30.203 "traddr": "10.0.0.2", 00:20:30.203 "trsvcid": "4420" 00:20:30.203 }, 00:20:30.203 "peer_address": { 00:20:30.203 "trtype": "TCP", 00:20:30.204 "adrfam": "IPv4", 00:20:30.204 "traddr": "10.0.0.1", 00:20:30.204 "trsvcid": "37276" 00:20:30.204 }, 00:20:30.204 "auth": { 00:20:30.204 "state": "completed", 00:20:30.204 "digest": "sha512", 00:20:30.204 "dhgroup": "ffdhe3072" 00:20:30.204 } 00:20:30.204 } 00:20:30.204 ]' 00:20:30.204 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.204 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.204 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.204 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:30.204 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.204 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.204 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.204 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.466 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:30.466 07:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:31.038 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.038 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:31.038 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.038 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.038 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.038 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.038 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:31.038 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:31.301 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:31.301 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.301 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:31.301 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:31.301 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.301 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.302 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:31.302 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.302 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.302 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.302 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.302 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.302 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.564 00:20:31.564 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.564 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.564 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.825 { 00:20:31.825 "cntlid": 119, 00:20:31.825 "qid": 0, 00:20:31.825 "state": "enabled", 00:20:31.825 "thread": "nvmf_tgt_poll_group_000", 00:20:31.825 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:31.825 "listen_address": { 00:20:31.825 "trtype": "TCP", 00:20:31.825 "adrfam": "IPv4", 00:20:31.825 "traddr": "10.0.0.2", 00:20:31.825 "trsvcid": "4420" 00:20:31.825 }, 00:20:31.825 "peer_address": { 00:20:31.825 "trtype": "TCP", 00:20:31.825 "adrfam": "IPv4", 00:20:31.825 "traddr": "10.0.0.1", 00:20:31.825 "trsvcid": "37302" 00:20:31.825 }, 00:20:31.825 "auth": { 00:20:31.825 "state": "completed", 00:20:31.825 "digest": "sha512", 00:20:31.825 "dhgroup": "ffdhe3072" 00:20:31.825 } 00:20:31.825 } 00:20:31.825 ]' 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.825 07:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.087 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:32.087 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:32.658 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.658 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.658 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.658 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.658 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.658 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.658 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.659 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:32.659 07:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.920 07:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.179 00:20:33.180 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.180 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.180 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.441 07:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.441 { 00:20:33.441 "cntlid": 121, 00:20:33.441 "qid": 0, 00:20:33.441 "state": "enabled", 00:20:33.441 "thread": "nvmf_tgt_poll_group_000", 00:20:33.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:33.441 "listen_address": { 00:20:33.441 "trtype": "TCP", 00:20:33.441 "adrfam": "IPv4", 00:20:33.441 "traddr": "10.0.0.2", 00:20:33.441 "trsvcid": "4420" 00:20:33.441 }, 00:20:33.441 "peer_address": { 00:20:33.441 "trtype": "TCP", 00:20:33.441 "adrfam": "IPv4", 00:20:33.441 "traddr": "10.0.0.1", 00:20:33.441 "trsvcid": "37326" 00:20:33.441 }, 00:20:33.441 "auth": { 00:20:33.441 "state": "completed", 00:20:33.441 "digest": "sha512", 00:20:33.441 "dhgroup": "ffdhe4096" 00:20:33.441 } 00:20:33.441 } 00:20:33.441 ]' 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.441 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.702 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:33.702 07:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:34.274 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.274 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:34.274 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.274 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.274 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
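Each successful attach is verified by dumping the subsystem's queue pairs and asserting on the auth object, exactly as the [[ sha512 == ... ]] comparisons above do. The same check condensed into a standalone sketch, with rpc_cmd standing in for the target-side RPC wrapper the harness uses:

# Verify the negotiated DH-HMAC-CHAP parameters on the live qpair (sketch)
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]   # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished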
00:20:34.274 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.274 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:34.274 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:34.535 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.536 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.797 00:20:34.797 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.797 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.797 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.058 07:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.058 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.058 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.058 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.058 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.058 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.058 { 00:20:35.058 "cntlid": 123, 00:20:35.058 "qid": 0, 00:20:35.058 "state": "enabled", 00:20:35.058 "thread": "nvmf_tgt_poll_group_000", 00:20:35.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:35.058 "listen_address": { 00:20:35.058 "trtype": "TCP", 00:20:35.058 "adrfam": "IPv4", 00:20:35.058 "traddr": "10.0.0.2", 00:20:35.058 "trsvcid": "4420" 00:20:35.058 }, 00:20:35.058 "peer_address": { 00:20:35.058 "trtype": "TCP", 00:20:35.058 "adrfam": "IPv4", 00:20:35.058 "traddr": "10.0.0.1", 00:20:35.058 "trsvcid": "37348" 00:20:35.058 }, 00:20:35.058 "auth": { 00:20:35.058 "state": "completed", 00:20:35.058 "digest": "sha512", 00:20:35.058 "dhgroup": "ffdhe4096" 00:20:35.058 } 00:20:35.058 } 00:20:35.058 ]' 00:20:35.058 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.058 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.058 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.058 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.058 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.320 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.320 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.320 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.320 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:35.320 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:35.891 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.154 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.154 07:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.154 07:30:03 
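Besides the SPDK bdev initiator, every iteration also authenticates from the kernel host through nvme-cli, passing the DHHC-1 secrets on the command line and tearing the session down so the next key can be installed. A condensed sketch of that round trip with placeholder variables (hostnqn, hostid, and both secrets are generated earlier in the run):

# Kernel-initiator connect with in-band DH-HMAC-CHAP (placeholder values)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# remove the host entry so the next iteration can re-add it with new keys
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"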
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.154 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.416 00:20:36.416 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.416 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.416 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.679 07:30:04 
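The connect_authenticate calls traced at target/auth.sh@65-71 follow one fixed recipe: record the digest, DH group, and key index, register the host NQN on the subsystem with the matching secrets, then attach a host-side controller that can only come up if the DH-HMAC-CHAP exchange completes. A compressed sketch of that body, not the full helper; the controller key is shown unconditionally for brevity and hostnqn is a placeholder:

# Sketch of the traced connect_authenticate flow (assumptions noted above)
connect_authenticate() {
    local digest=$1 dhgroup=$2 key="key$3"
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "$key" --dhchap-ctrlr-key "c$key"
    # the attach itself is the test: it fails unless authentication succeeds
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "$key" --dhchap-ctrlr-key "c$key"
}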
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.679 { 00:20:36.679 "cntlid": 125, 00:20:36.679 "qid": 0, 00:20:36.679 "state": "enabled", 00:20:36.679 "thread": "nvmf_tgt_poll_group_000", 00:20:36.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:36.679 "listen_address": { 00:20:36.679 "trtype": "TCP", 00:20:36.679 "adrfam": "IPv4", 00:20:36.679 "traddr": "10.0.0.2", 00:20:36.679 "trsvcid": "4420" 00:20:36.679 }, 00:20:36.679 "peer_address": { 00:20:36.679 "trtype": "TCP", 00:20:36.679 "adrfam": "IPv4", 00:20:36.679 "traddr": "10.0.0.1", 00:20:36.679 "trsvcid": "37376" 00:20:36.679 }, 00:20:36.679 "auth": { 00:20:36.679 "state": "completed", 00:20:36.679 "digest": "sha512", 00:20:36.679 "dhgroup": "ffdhe4096" 00:20:36.679 } 00:20:36.679 } 00:20:36.679 ]' 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.679 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.940 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:36.940 07:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:37.511 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.511 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.511 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.511 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.511 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.511 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.511 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.511 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.801 07:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.061 00:20:38.061 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.061 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.061 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.321 07:30:06 
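Note what is missing in the key3 iteration above: nvmf_subsystem_add_host is called with --dhchap-key key3 only, and the matching bdev_nvme_attach_controller likewise carries no --dhchap-ctrlr-key. That is the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at target/auth.sh@68 at work: with no controller key configured for index 3, the parameter expands to nothing and this pass exercises unidirectional (host-only) authentication. The expansion in isolation, with assumed array contents:

# ":+" drops the option entirely when the controller key is empty (sketch)
ckeys=("a" "b" "c" "")                  # assumed: index 3 has no controller key
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]} extra args"           # prints "0 extra args": host-only auth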
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.321 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.321 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.321 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.321 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.321 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.321 { 00:20:38.321 "cntlid": 127, 00:20:38.321 "qid": 0, 00:20:38.321 "state": "enabled", 00:20:38.321 "thread": "nvmf_tgt_poll_group_000", 00:20:38.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:38.321 "listen_address": { 00:20:38.321 "trtype": "TCP", 00:20:38.321 "adrfam": "IPv4", 00:20:38.321 "traddr": "10.0.0.2", 00:20:38.321 "trsvcid": "4420" 00:20:38.321 }, 00:20:38.321 "peer_address": { 00:20:38.321 "trtype": "TCP", 00:20:38.321 "adrfam": "IPv4", 00:20:38.321 "traddr": "10.0.0.1", 00:20:38.321 "trsvcid": "52894" 00:20:38.321 }, 00:20:38.321 "auth": { 00:20:38.321 "state": "completed", 00:20:38.321 "digest": "sha512", 00:20:38.321 "dhgroup": "ffdhe4096" 00:20:38.321 } 00:20:38.321 } 00:20:38.321 ]' 00:20:38.321 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.322 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.322 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.322 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:38.322 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.322 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.322 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.322 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.582 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:38.582 07:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:39.154 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.154 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:39.154 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.154 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.154 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.154 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.154 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:39.154 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.154 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.414 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.675 00:20:39.675 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.675 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.675 
07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.936 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.936 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.936 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.936 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.936 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.936 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.936 { 00:20:39.936 "cntlid": 129, 00:20:39.936 "qid": 0, 00:20:39.936 "state": "enabled", 00:20:39.936 "thread": "nvmf_tgt_poll_group_000", 00:20:39.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:39.936 "listen_address": { 00:20:39.936 "trtype": "TCP", 00:20:39.936 "adrfam": "IPv4", 00:20:39.936 "traddr": "10.0.0.2", 00:20:39.936 "trsvcid": "4420" 00:20:39.936 }, 00:20:39.936 "peer_address": { 00:20:39.936 "trtype": "TCP", 00:20:39.936 "adrfam": "IPv4", 00:20:39.936 "traddr": "10.0.0.1", 00:20:39.936 "trsvcid": "52920" 00:20:39.936 }, 00:20:39.936 "auth": { 00:20:39.936 "state": "completed", 00:20:39.936 "digest": "sha512", 00:20:39.936 "dhgroup": "ffdhe6144" 00:20:39.936 } 00:20:39.936 } 00:20:39.936 ]' 00:20:39.936 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.936 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.936 07:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.196 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.196 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.196 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.196 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.196 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.196 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:40.196 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:41.139 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.139 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.139 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.139 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.139 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.139 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.139 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:41.139 07:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.139 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.399 00:20:41.399 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.399 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.399 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.660 { 00:20:41.660 "cntlid": 131, 00:20:41.660 "qid": 0, 00:20:41.660 "state": "enabled", 00:20:41.660 "thread": "nvmf_tgt_poll_group_000", 00:20:41.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:41.660 "listen_address": { 00:20:41.660 "trtype": "TCP", 00:20:41.660 "adrfam": "IPv4", 00:20:41.660 "traddr": "10.0.0.2", 00:20:41.660 "trsvcid": "4420" 00:20:41.660 }, 00:20:41.660 "peer_address": { 00:20:41.660 "trtype": "TCP", 00:20:41.660 "adrfam": "IPv4", 00:20:41.660 "traddr": "10.0.0.1", 00:20:41.660 "trsvcid": "52958" 00:20:41.660 }, 00:20:41.660 "auth": { 00:20:41.660 "state": "completed", 00:20:41.660 "digest": "sha512", 00:20:41.660 "dhgroup": "ffdhe6144" 00:20:41.660 } 00:20:41.660 } 00:20:41.660 ]' 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.660 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.921 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.921 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.921 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.921 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:41.921 07:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:42.862 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:42.863 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.863 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.863 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.863 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.863 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.863 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.863 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.863 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.863 07:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.123 00:20:43.123 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.123 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.123 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.383 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.383 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.383 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.383 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.383 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.383 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.383 { 00:20:43.383 "cntlid": 133, 00:20:43.384 "qid": 0, 00:20:43.384 "state": "enabled", 00:20:43.384 "thread": "nvmf_tgt_poll_group_000", 00:20:43.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:43.384 "listen_address": { 00:20:43.384 "trtype": "TCP", 00:20:43.384 "adrfam": "IPv4", 00:20:43.384 "traddr": "10.0.0.2", 00:20:43.384 "trsvcid": "4420" 00:20:43.384 }, 00:20:43.384 "peer_address": { 00:20:43.384 "trtype": "TCP", 00:20:43.384 "adrfam": "IPv4", 00:20:43.384 "traddr": "10.0.0.1", 00:20:43.384 "trsvcid": "52994" 00:20:43.384 }, 00:20:43.384 "auth": { 00:20:43.384 "state": "completed", 00:20:43.384 "digest": "sha512", 00:20:43.384 "dhgroup": "ffdhe6144" 00:20:43.384 } 00:20:43.384 } 00:20:43.384 ]' 00:20:43.384 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.384 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.384 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.384 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:43.384 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.644 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.644 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.644 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.644 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret 
DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:43.644 07:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:20:44.585 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.846 00:20:44.846 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.846 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.846 07:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.106 { 00:20:45.106 "cntlid": 135, 00:20:45.106 "qid": 0, 00:20:45.106 "state": "enabled", 00:20:45.106 "thread": "nvmf_tgt_poll_group_000", 00:20:45.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:45.106 "listen_address": { 00:20:45.106 "trtype": "TCP", 00:20:45.106 "adrfam": "IPv4", 00:20:45.106 "traddr": "10.0.0.2", 00:20:45.106 "trsvcid": "4420" 00:20:45.106 }, 00:20:45.106 "peer_address": { 00:20:45.106 "trtype": "TCP", 00:20:45.106 "adrfam": "IPv4", 00:20:45.106 "traddr": "10.0.0.1", 00:20:45.106 "trsvcid": "53012" 00:20:45.106 }, 00:20:45.106 "auth": { 00:20:45.106 "state": "completed", 00:20:45.106 "digest": "sha512", 00:20:45.106 "dhgroup": "ffdhe6144" 00:20:45.106 } 00:20:45.106 } 00:20:45.106 ]' 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.106 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.367 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:45.367 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:45.938 07:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.938 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:45.938 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.938 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.938 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.938 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.938 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.938 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:45.938 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.199 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.773 00:20:46.773 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.773 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.773 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.773 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.773 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.773 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.773 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.032 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.032 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.032 { 00:20:47.032 "cntlid": 137, 00:20:47.032 "qid": 0, 00:20:47.032 "state": "enabled", 00:20:47.032 "thread": "nvmf_tgt_poll_group_000", 00:20:47.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:47.032 "listen_address": { 00:20:47.032 "trtype": "TCP", 00:20:47.032 "adrfam": "IPv4", 00:20:47.032 "traddr": "10.0.0.2", 00:20:47.032 "trsvcid": "4420" 00:20:47.032 }, 00:20:47.032 "peer_address": { 00:20:47.032 "trtype": "TCP", 00:20:47.032 "adrfam": "IPv4", 00:20:47.032 "traddr": "10.0.0.1", 00:20:47.032 "trsvcid": "53038" 00:20:47.032 }, 00:20:47.032 "auth": { 00:20:47.032 "state": "completed", 00:20:47.032 "digest": "sha512", 00:20:47.032 "dhgroup": "ffdhe8192" 00:20:47.032 } 00:20:47.032 } 00:20:47.032 ]' 00:20:47.032 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.032 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.032 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.032 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.032 07:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.032 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.032 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.032 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.292 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:47.292 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:47.863 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.863 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.863 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.863 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.863 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.863 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.863 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:47.863 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:48.124 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:48.124 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.124 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:48.124 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.124 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.124 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.124 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.124 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.124 07:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.124 07:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.124 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.124 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.124 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.385 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.647 { 00:20:48.647 "cntlid": 139, 00:20:48.647 "qid": 0, 00:20:48.647 "state": "enabled", 00:20:48.647 "thread": "nvmf_tgt_poll_group_000", 00:20:48.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:48.647 "listen_address": { 00:20:48.647 "trtype": "TCP", 00:20:48.647 "adrfam": "IPv4", 00:20:48.647 "traddr": "10.0.0.2", 00:20:48.647 "trsvcid": "4420" 00:20:48.647 }, 00:20:48.647 "peer_address": { 00:20:48.647 "trtype": "TCP", 00:20:48.647 "adrfam": "IPv4", 00:20:48.647 "traddr": "10.0.0.1", 00:20:48.647 "trsvcid": "35782" 00:20:48.647 }, 00:20:48.647 "auth": { 00:20:48.647 "state": "completed", 00:20:48.647 "digest": "sha512", 00:20:48.647 "dhgroup": "ffdhe8192" 00:20:48.647 } 00:20:48.647 } 00:20:48.647 ]' 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.647 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.908 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.908 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.908 07:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.908 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.908 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.908 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:48.908 07:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: --dhchap-ctrl-secret DHHC-1:02:NjY5MzAxMzU3YjhjYTUyNjNkZTBkZGU0MzgyMTc5Y2EwMTBlNWIxODhmMDAyYTlkOkPr0A==: 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.850 07:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.850 07:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.422 00:20:50.422 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.422 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.422 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.422 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.422 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.422 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.422 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.683 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.683 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.683 { 00:20:50.683 "cntlid": 141, 00:20:50.683 "qid": 0, 00:20:50.683 "state": "enabled", 00:20:50.683 "thread": "nvmf_tgt_poll_group_000", 00:20:50.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:50.683 "listen_address": { 00:20:50.683 "trtype": "TCP", 00:20:50.683 "adrfam": "IPv4", 00:20:50.683 "traddr": "10.0.0.2", 00:20:50.683 "trsvcid": "4420" 00:20:50.683 }, 00:20:50.683 "peer_address": { 00:20:50.683 "trtype": "TCP", 00:20:50.683 "adrfam": "IPv4", 00:20:50.683 "traddr": "10.0.0.1", 00:20:50.683 "trsvcid": "35802" 00:20:50.683 }, 00:20:50.683 "auth": { 00:20:50.683 "state": "completed", 00:20:50.683 "digest": "sha512", 00:20:50.683 "dhgroup": "ffdhe8192" 00:20:50.683 } 00:20:50.683 } 00:20:50.683 ]' 00:20:50.683 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.683 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.683 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.683 07:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.683 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.683 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.683 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.683 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.944 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:50.944 07:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:01:NjY1OWY3ZmFhYjNlMTJkZGJiYzliNDc4MWI0ODFkYznWpxWQ: 00:20:51.516 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.516 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.516 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.516 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.516 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.516 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.516 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:51.516 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.777 07:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.777 07:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.349 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.349 { 00:20:52.349 "cntlid": 143, 00:20:52.349 "qid": 0, 00:20:52.349 "state": "enabled", 00:20:52.349 "thread": "nvmf_tgt_poll_group_000", 00:20:52.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:52.349 "listen_address": { 00:20:52.349 "trtype": "TCP", 00:20:52.349 "adrfam": "IPv4", 00:20:52.349 "traddr": "10.0.0.2", 00:20:52.349 "trsvcid": "4420" 00:20:52.349 }, 00:20:52.349 "peer_address": { 00:20:52.349 "trtype": "TCP", 00:20:52.349 "adrfam": "IPv4", 00:20:52.349 "traddr": "10.0.0.1", 00:20:52.349 "trsvcid": "35840" 00:20:52.349 }, 00:20:52.349 "auth": { 00:20:52.349 "state": "completed", 00:20:52.349 "digest": "sha512", 00:20:52.349 "dhgroup": "ffdhe8192" 00:20:52.349 } 00:20:52.349 } 00:20:52.349 ]' 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.349 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.349 
07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.610 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.610 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.610 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.610 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.610 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.870 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:52.870 07:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:53.441 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.701 07:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.701 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.702 07:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.962 00:20:53.962 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.962 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.962 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.223 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.223 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.223 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.223 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.223 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.223 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.223 { 00:20:54.223 "cntlid": 145, 00:20:54.223 "qid": 0, 00:20:54.223 "state": "enabled", 00:20:54.223 "thread": "nvmf_tgt_poll_group_000", 00:20:54.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:54.223 "listen_address": { 00:20:54.223 "trtype": "TCP", 00:20:54.223 "adrfam": "IPv4", 00:20:54.223 "traddr": "10.0.0.2", 00:20:54.223 "trsvcid": "4420" 00:20:54.223 }, 00:20:54.223 "peer_address": { 00:20:54.223 
"trtype": "TCP", 00:20:54.223 "adrfam": "IPv4", 00:20:54.223 "traddr": "10.0.0.1", 00:20:54.223 "trsvcid": "35868" 00:20:54.223 }, 00:20:54.223 "auth": { 00:20:54.223 "state": "completed", 00:20:54.223 "digest": "sha512", 00:20:54.223 "dhgroup": "ffdhe8192" 00:20:54.223 } 00:20:54.223 } 00:20:54.223 ]' 00:20:54.223 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.223 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.223 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.484 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:54.484 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.484 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.484 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.484 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.484 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:54.484 07:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTAyYzRjZTc2MTU5MTVkOGYwZDI2NjM5NzgwYWFmM2NmMDQ4YjgwZjY0NTI2ZmNmh+FV4Q==: --dhchap-ctrl-secret DHHC-1:03:ZDEyOTAxYzY2Yjk3NjQzZjE5OWJmNDM0MmE5MmJhZjQ0ODZlNTk2NGRhZGI5MGYyYjMzM2M0ZjM1ZTI0MWYwM9SQZA0=: 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:55.428 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:55.689 request: 00:20:55.689 { 00:20:55.689 "name": "nvme0", 00:20:55.689 "trtype": "tcp", 00:20:55.689 "traddr": "10.0.0.2", 00:20:55.689 "adrfam": "ipv4", 00:20:55.689 "trsvcid": "4420", 00:20:55.689 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:55.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:55.689 "prchk_reftag": false, 00:20:55.689 "prchk_guard": false, 00:20:55.689 "hdgst": false, 00:20:55.689 "ddgst": false, 00:20:55.689 "dhchap_key": "key2", 00:20:55.689 "allow_unrecognized_csi": false, 00:20:55.689 "method": "bdev_nvme_attach_controller", 00:20:55.689 "req_id": 1 00:20:55.689 } 00:20:55.689 Got JSON-RPC error response 00:20:55.689 response: 00:20:55.689 { 00:20:55.689 "code": -5, 00:20:55.689 "message": "Input/output error" 00:20:55.689 } 00:20:55.689 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:55.689 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:55.689 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:55.689 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:55.689 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.689 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.689 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.689 07:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.689 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.689 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:55.690 07:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:56.261 request: 00:20:56.261 { 00:20:56.261 "name": "nvme0", 00:20:56.261 "trtype": "tcp", 00:20:56.261 "traddr": "10.0.0.2", 00:20:56.261 "adrfam": "ipv4", 00:20:56.261 "trsvcid": "4420", 00:20:56.261 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:56.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:56.261 "prchk_reftag": false, 00:20:56.261 "prchk_guard": false, 00:20:56.261 "hdgst": false, 00:20:56.261 "ddgst": false, 00:20:56.261 "dhchap_key": "key1", 00:20:56.261 "dhchap_ctrlr_key": "ckey2", 00:20:56.261 "allow_unrecognized_csi": false, 00:20:56.261 "method": "bdev_nvme_attach_controller", 00:20:56.261 "req_id": 1 00:20:56.261 } 00:20:56.261 Got JSON-RPC error response 00:20:56.261 response: 00:20:56.261 { 00:20:56.261 "code": -5, 00:20:56.261 "message": "Input/output error" 00:20:56.261 } 00:20:56.261 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:56.261 07:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:56.261 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:56.261 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.262 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.523 request: 00:20:56.523 { 00:20:56.523 "name": "nvme0", 00:20:56.523 "trtype": "tcp", 00:20:56.523 "traddr": "10.0.0.2", 00:20:56.523 "adrfam": "ipv4", 00:20:56.523 "trsvcid": "4420", 00:20:56.523 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:56.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:56.523 "prchk_reftag": false, 00:20:56.523 "prchk_guard": false, 00:20:56.523 "hdgst": false, 00:20:56.523 "ddgst": false, 00:20:56.523 "dhchap_key": "key1", 00:20:56.523 "dhchap_ctrlr_key": "ckey1", 00:20:56.523 "allow_unrecognized_csi": false, 00:20:56.523 "method": "bdev_nvme_attach_controller", 00:20:56.523 "req_id": 1 00:20:56.523 } 00:20:56.523 Got JSON-RPC error response 00:20:56.523 response: 00:20:56.523 { 00:20:56.523 "code": -5, 00:20:56.523 "message": "Input/output error" 00:20:56.523 } 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1430800 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1430800 ']' 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1430800 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1430800 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1430800' 00:20:56.784 killing process with pid 1430800 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1430800 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1430800 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1457082 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1457082 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1457082 ']' 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.784 07:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1457082 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1457082 ']' 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
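The trace above kills the first target process (pid 1430800) and restarts nvmf_tgt with --wait-for-rpc and -L nvmf_auth, so DH-HMAC-CHAP debug logging is active and initialization is held until framework_start_init is issued over RPC; the key files generated earlier in the run are then registered in the keyring by the keyring_file_add_key calls that follow. A minimal sketch of the equivalent manual sequence, assuming the default RPC socket and reusing the key paths and NQN from this trace; the subsystem and listener lines are assumed setup from earlier in the run, not shown in this excerpt:

    # start the target paused, with auth-layer debug logging enabled
    build/bin/nvmf_tgt --wait-for-rpc -L nvmf_auth &

    # resume framework init (the rpc_cmd batch in the trace does this first),
    # then register the DH-HMAC-CHAP key files in the keyring by name
    scripts/rpc.py framework_start_init
    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.aa6
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GQI
    scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.ak8

    # transport, subsystem, and listener as configured earlier in the run
    # (assumed here; this excerpt only shows the key registration)
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-03.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Registering keys by name (key0, ckey0, ...) is what lets the later nvmf_subsystem_add_host and bdev_nvme_attach_controller calls refer to them via --dhchap-key/--dhchap-ctrlr-key without passing secret material on the command line.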
00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.726 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.987 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.987 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:57.987 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:57.987 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.987 07:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.987 null0 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aa6 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.GQI ]] 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GQI 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7Yz 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.lHV ]] 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lHV 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:57.987 07:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Fhi 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.C26 ]] 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C26 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.987 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ak8 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
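In this pass the host is registered with --dhchap-key key3 and no --dhchap-ctrlr-key (the [[ -n '' ]] check above shows ckeys[3] is empty), so authentication is unidirectional: the target authenticates the host with key3, but the host does not challenge the controller in return. A sketch of the two attach variants, using only flags that appear verbatim in this trace; the HOSTNQN/SUBNQN values are copied from it, and the bidirectional form is hypothetical for this run, since no ckey3 was loaded:

    # values taken from the surrounding trace
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # unidirectional (this run): only the host proves possession of key3
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

    # bidirectional (hypothetical here): also challenge the controller with a
    # controller key; no ckey3 was registered in this run
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

The key names must match on both sides: the target's nvmf_subsystem_add_host entry for this host references key3, and a mismatch (or a missing key, as in the NOT cases elsewhere in this log) surfaces as the JSON-RPC "Input/output error" (code -5) from bdev_nvme_attach_controller.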
00:20:58.248 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.822 nvme0n1 00:20:58.822 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.822 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.822 07:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.083 { 00:20:59.083 "cntlid": 1, 00:20:59.083 "qid": 0, 00:20:59.083 "state": "enabled", 00:20:59.083 "thread": "nvmf_tgt_poll_group_000", 00:20:59.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:59.083 "listen_address": { 00:20:59.083 "trtype": "TCP", 00:20:59.083 "adrfam": "IPv4", 00:20:59.083 "traddr": "10.0.0.2", 00:20:59.083 "trsvcid": "4420" 00:20:59.083 }, 00:20:59.083 "peer_address": { 00:20:59.083 "trtype": "TCP", 00:20:59.083 "adrfam": "IPv4", 00:20:59.083 "traddr": "10.0.0.1", 00:20:59.083 "trsvcid": "48930" 00:20:59.083 }, 00:20:59.083 "auth": { 00:20:59.083 "state": "completed", 00:20:59.083 "digest": "sha512", 00:20:59.083 "dhgroup": "ffdhe8192" 00:20:59.083 } 00:20:59.083 } 00:20:59.083 ]' 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.083 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.343 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.343 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.343 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.343 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:59.343 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:20:59.915 07:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.177 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.439 request: 00:21:00.439 { 00:21:00.439 "name": "nvme0", 00:21:00.439 "trtype": "tcp", 00:21:00.439 "traddr": "10.0.0.2", 00:21:00.439 "adrfam": "ipv4", 00:21:00.439 "trsvcid": "4420", 00:21:00.439 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:00.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:00.439 "prchk_reftag": false, 00:21:00.439 "prchk_guard": false, 00:21:00.439 "hdgst": false, 00:21:00.439 "ddgst": false, 00:21:00.439 "dhchap_key": "key3", 00:21:00.439 "allow_unrecognized_csi": false, 00:21:00.439 "method": "bdev_nvme_attach_controller", 00:21:00.439 "req_id": 1 00:21:00.439 } 00:21:00.439 Got JSON-RPC error response 00:21:00.439 response: 00:21:00.439 { 00:21:00.439 "code": -5, 00:21:00.439 "message": "Input/output error" 00:21:00.439 } 00:21:00.439 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:00.439 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:00.439 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:00.439 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:00.439 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:00.439 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:00.439 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:00.439 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:00.779 request: 00:21:00.779 { 00:21:00.779 "name": "nvme0", 00:21:00.779 "trtype": "tcp", 00:21:00.779 "traddr": "10.0.0.2", 00:21:00.779 "adrfam": "ipv4", 00:21:00.779 "trsvcid": "4420", 00:21:00.779 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:00.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:00.779 "prchk_reftag": false, 00:21:00.779 "prchk_guard": false, 00:21:00.779 "hdgst": false, 00:21:00.779 "ddgst": false, 00:21:00.779 "dhchap_key": "key3", 00:21:00.779 "allow_unrecognized_csi": false, 00:21:00.779 "method": "bdev_nvme_attach_controller", 00:21:00.779 "req_id": 1 00:21:00.779 } 00:21:00.779 Got JSON-RPC error response 00:21:00.779 response: 00:21:00.779 { 00:21:00.779 "code": -5, 00:21:00.779 "message": "Input/output error" 00:21:00.779 } 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:00.779 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:01.059 07:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:01.363 request: 00:21:01.363 { 00:21:01.363 "name": "nvme0", 00:21:01.363 "trtype": "tcp", 00:21:01.363 "traddr": "10.0.0.2", 00:21:01.363 "adrfam": "ipv4", 00:21:01.363 "trsvcid": "4420", 00:21:01.363 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:01.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:01.363 "prchk_reftag": false, 00:21:01.363 "prchk_guard": false, 00:21:01.363 "hdgst": false, 00:21:01.363 "ddgst": false, 00:21:01.363 "dhchap_key": "key0", 00:21:01.363 "dhchap_ctrlr_key": "key1", 00:21:01.363 "allow_unrecognized_csi": false, 00:21:01.363 "method": "bdev_nvme_attach_controller", 00:21:01.363 "req_id": 1 00:21:01.363 } 00:21:01.363 Got JSON-RPC error response 00:21:01.363 response: 00:21:01.363 { 00:21:01.363 "code": -5, 00:21:01.363 "message": "Input/output error" 00:21:01.363 } 00:21:01.363 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:01.363 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:01.363 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:01.363 07:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:01.363 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:01.363 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:01.364 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:01.625 nvme0n1 00:21:01.625 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:01.625 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:01.625 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.886 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.886 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.886 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.886 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:01.886 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.886 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.886 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.886 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:01.886 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:01.886 07:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:02.828 nvme0n1 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:02.828 07:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.088 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.088 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:21:03.088 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: --dhchap-ctrl-secret DHHC-1:03:OGM0YmMxMDFiZmE1N2EwMDk5ODk3MmUxNjkzM2ZjZWY3NWRlMzE2OGZmMThlZjRhMDI0MzM1YWVmYzAxZmIxMpuL1zg=: 00:21:03.658 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:03.658 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:03.658 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:03.658 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:03.658 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:03.658 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:03.658 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:03.658 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.658 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.919 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:21:03.919 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:03.919 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:03.919 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:03.919 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.919 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:03.919 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:03.919 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:03.919 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:03.919 07:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:04.490 request: 00:21:04.490 { 00:21:04.490 "name": "nvme0", 00:21:04.490 "trtype": "tcp", 00:21:04.490 "traddr": "10.0.0.2", 00:21:04.490 "adrfam": "ipv4", 00:21:04.490 "trsvcid": "4420", 00:21:04.490 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:04.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:04.490 "prchk_reftag": false, 00:21:04.490 "prchk_guard": false, 00:21:04.490 "hdgst": false, 00:21:04.490 "ddgst": false, 00:21:04.490 "dhchap_key": "key1", 00:21:04.490 "allow_unrecognized_csi": false, 00:21:04.490 "method": "bdev_nvme_attach_controller", 00:21:04.490 "req_id": 1 00:21:04.490 } 00:21:04.490 Got JSON-RPC error response 00:21:04.490 response: 00:21:04.490 { 00:21:04.490 "code": -5, 00:21:04.490 "message": "Input/output error" 00:21:04.490 } 00:21:04.490 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:04.490 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.490 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.490 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.490 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:04.490 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:04.490 07:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:05.061 nvme0n1 00:21:05.061 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:05.061 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:05.061 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.321 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.321 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.321 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.581 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.581 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.581 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.581 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.581 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:05.581 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:05.581 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:05.581 nvme0n1 00:21:05.840 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:05.840 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:05.840 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.840 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.840 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.840 07:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: '' 2s 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: ]] 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjA2MTk4YTkxOGVhODMxZTFiMDIxZjQyNTI3ZTIwYzZHvzsW: 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:06.101 07:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: 2s 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: ]] 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MmExMzMzYmQyODIyY2FjNGI3NjZmMjRhODA4NjdhNjcyMjk1NDkwODQ4Y2M3MTRhcw9xpA==: 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:08.016 07:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:10.563 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:10.824 nvme0n1 00:21:11.085 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:11.085 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.085 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.085 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.085 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:11.085 07:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:11.345 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:11.345 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:11.345 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.605 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.605 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.605 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.605 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.605 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.605 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:11.605 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:11.866 07:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:12.439 request: 00:21:12.439 { 00:21:12.439 "name": "nvme0", 00:21:12.439 "dhchap_key": "key1", 00:21:12.439 "dhchap_ctrlr_key": "key3", 00:21:12.439 "method": "bdev_nvme_set_keys", 00:21:12.439 "req_id": 1 00:21:12.439 } 00:21:12.439 Got JSON-RPC error response 00:21:12.439 response: 00:21:12.439 { 00:21:12.439 "code": -13, 00:21:12.439 "message": "Permission denied" 00:21:12.439 } 00:21:12.439 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:12.439 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.439 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.439 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.439 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:12.439 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:12.439 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.699 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:12.699 07:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:13.641 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:13.641 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:13.641 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.902 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:13.902 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:13.902 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.902 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.902 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.902 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:13.902 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:13.902 07:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:14.475 nvme0n1 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
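The set_keys exchanges traced around this point follow one pattern: the target first narrows the allowed key pair with nvmf_subsystem_set_keys, then the host re-authenticates the live controller with bdev_nvme_set_keys. Requesting a pair the target does not permit (key1/key3 above, key2/key0 here) fails with -13 Permission denied rather than dropping the connection. A condensed sketch of the accepted path, with values as in this run:

    # target: only key2 (host-to-target) / key3 (target-to-host) remain valid
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # host: re-key the attached controller in place, without detaching it
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3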
00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:14.475 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:15.047 request: 00:21:15.047 { 00:21:15.047 "name": "nvme0", 00:21:15.047 "dhchap_key": "key2", 00:21:15.047 "dhchap_ctrlr_key": "key0", 00:21:15.047 "method": "bdev_nvme_set_keys", 00:21:15.047 "req_id": 1 00:21:15.047 } 00:21:15.047 Got JSON-RPC error response 00:21:15.047 response: 00:21:15.047 { 00:21:15.047 "code": -13, 00:21:15.047 "message": "Permission denied" 00:21:15.047 } 00:21:15.047 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:15.047 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.047 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.047 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.047 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:15.047 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:15.047 07:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.307 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:15.307 07:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:16.251 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:16.251 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:16.251 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.251 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:16.251 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:16.251 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:16.251 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1431031 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1431031 ']' 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1431031 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:16.512 
07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1431031 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1431031' 00:21:16.512 killing process with pid 1431031 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1431031 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1431031 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.512 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.773 rmmod nvme_tcp 00:21:16.773 rmmod nvme_fabrics 00:21:16.773 rmmod nvme_keyring 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1457082 ']' 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1457082 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1457082 ']' 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1457082 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1457082 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1457082' 00:21:16.773 killing process with pid 1457082 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1457082 00:21:16.773 07:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1457082 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.773 07:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.336 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:19.336 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.aa6 /tmp/spdk.key-sha256.7Yz /tmp/spdk.key-sha384.Fhi /tmp/spdk.key-sha512.ak8 /tmp/spdk.key-sha512.GQI /tmp/spdk.key-sha384.lHV /tmp/spdk.key-sha256.C26 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:19.336 00:21:19.336 real 2m36.814s 00:21:19.336 user 5m52.873s 00:21:19.336 sys 0m24.661s 00:21:19.336 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.336 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.336 ************************************ 00:21:19.336 END TEST nvmf_auth_target 00:21:19.336 ************************************ 00:21:19.336 07:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:19.336 07:30:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:19.336 07:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:19.336 07:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.336 07:30:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:19.336 ************************************ 00:21:19.336 START TEST nvmf_bdevio_no_huge 00:21:19.336 ************************************ 00:21:19.336 07:30:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:19.336 * Looking for test storage... 
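The shell trace that follows is the coverage guard (autotest_common.sh calling the version helpers in scripts/common.sh): it reads the last field of lcov --version and, when that version compares less than 2, keeps the legacy --rc lcov_* options. A behavioral sketch of the same check; the lt function below is a simplified stand-in for the script's cmp_versions walk, not its actual implementation:

    # returns success when dotted version $1 < $2, comparing numeric fields
    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1
    }
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi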
00:21:19.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:19.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.336 --rc genhtml_branch_coverage=1 00:21:19.336 --rc genhtml_function_coverage=1 00:21:19.336 --rc genhtml_legend=1 00:21:19.336 --rc geninfo_all_blocks=1 00:21:19.336 --rc geninfo_unexecuted_blocks=1 00:21:19.336 00:21:19.336 ' 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:19.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.336 --rc genhtml_branch_coverage=1 00:21:19.336 --rc genhtml_function_coverage=1 00:21:19.336 --rc genhtml_legend=1 00:21:19.336 --rc geninfo_all_blocks=1 00:21:19.336 --rc geninfo_unexecuted_blocks=1 00:21:19.336 00:21:19.336 ' 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:19.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.336 --rc genhtml_branch_coverage=1 00:21:19.336 --rc genhtml_function_coverage=1 00:21:19.336 --rc genhtml_legend=1 00:21:19.336 --rc geninfo_all_blocks=1 00:21:19.336 --rc geninfo_unexecuted_blocks=1 00:21:19.336 00:21:19.336 ' 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:19.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.336 --rc genhtml_branch_coverage=1 00:21:19.336 --rc genhtml_function_coverage=1 00:21:19.336 --rc genhtml_legend=1 00:21:19.336 --rc geninfo_all_blocks=1 00:21:19.336 --rc geninfo_unexecuted_blocks=1 00:21:19.336 00:21:19.336 ' 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.336 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:19.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.337 07:30:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.483 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.483 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.484 
07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:27.484 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:27.484 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:27.484 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:27.484 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.484 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:21:27.485 00:21:27.485 --- 10.0.0.2 ping statistics --- 00:21:27.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.485 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:27.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:21:27.485 00:21:27.485 --- 10.0.0.1 ping statistics --- 00:21:27.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.485 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1465245 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1465245 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1465245 ']' 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.485 07:30:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.485 [2024-11-26 07:30:54.794609] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:21:27.485 [2024-11-26 07:30:54.794688] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:27.485 [2024-11-26 07:30:54.901986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.485 [2024-11-26 07:30:54.962451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.485 [2024-11-26 07:30:54.962496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.485 [2024-11-26 07:30:54.962505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.485 [2024-11-26 07:30:54.962512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.485 [2024-11-26 07:30:54.962519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
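For orientation, the setup traced in nvmf_tcp_init above builds a two-port loopback: one port of the E810 pair (cvl_0_0) moves into a private namespace and serves as the target at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then started inside that namespace with hugepages disabled, which is the point of this test. Condensed from the commands above (interface and namespace names are whatever this host assigned; the iptables comment is shortened here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP on 4420; the comment tag lets iptr remove it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    # Launch the target in the namespace: no hugepages, 1024 MB heap,
    # tracepoints enabled (-e 0xFFFF), cores 3-6 (-m 0x78).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

The pings logged above exercise both directions of that link before the target is started, and the four reactor notices that follow match the 0x78 core mask.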
00:21:27.485 [2024-11-26 07:30:54.964300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:27.485 [2024-11-26 07:30:54.964522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:27.485 [2024-11-26 07:30:54.964645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:27.485 [2024-11-26 07:30:54.964648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.746 [2024-11-26 07:30:55.678614] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.746 Malloc0 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:27.746 [2024-11-26 07:30:55.732578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:27.746 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:27.746 { 00:21:27.746 "params": { 00:21:27.746 "name": "Nvme$subsystem", 00:21:27.746 "trtype": "$TEST_TRANSPORT", 00:21:27.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.746 "adrfam": "ipv4", 00:21:27.746 "trsvcid": "$NVMF_PORT", 00:21:27.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.747 "hdgst": ${hdgst:-false}, 00:21:27.747 "ddgst": ${ddgst:-false} 00:21:27.747 }, 00:21:27.747 "method": "bdev_nvme_attach_controller" 00:21:27.747 } 00:21:27.747 EOF 00:21:27.747 )") 00:21:27.747 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:27.747 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:21:27.747 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:27.747 07:30:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:27.747 "params": { 00:21:27.747 "name": "Nvme1", 00:21:27.747 "trtype": "tcp", 00:21:27.747 "traddr": "10.0.0.2", 00:21:27.747 "adrfam": "ipv4", 00:21:27.747 "trsvcid": "4420", 00:21:27.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.747 "hdgst": false, 00:21:27.747 "ddgst": false 00:21:27.747 }, 00:21:27.747 "method": "bdev_nvme_attach_controller" 00:21:27.747 }' 00:21:27.747 [2024-11-26 07:30:55.791552] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:21:27.747 [2024-11-26 07:30:55.791621] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1465545 ] 00:21:28.007 [2024-11-26 07:30:55.889447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:28.007 [2024-11-26 07:30:55.952001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.007 [2024-11-26 07:30:55.952172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.007 [2024-11-26 07:30:55.952187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.268 I/O targets: 00:21:28.268 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:28.268 00:21:28.268 00:21:28.268 CUnit - A unit testing framework for C - Version 2.1-3 00:21:28.268 http://cunit.sourceforge.net/ 00:21:28.268 00:21:28.268 00:21:28.268 Suite: bdevio tests on: Nvme1n1 00:21:28.268 Test: blockdev write read block ...passed 00:21:28.268 Test: blockdev write zeroes read block ...passed 00:21:28.268 Test: blockdev write zeroes read no split ...passed 00:21:28.268 Test: blockdev write zeroes read split ...passed 00:21:28.268 Test: blockdev write zeroes read split partial ...passed 00:21:28.268 Test: blockdev reset ...[2024-11-26 07:30:56.278365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:28.268 [2024-11-26 07:30:56.278462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186c800 (9): Bad file descriptor 00:21:28.268 [2024-11-26 07:30:56.291433] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:21:28.268 passed 00:21:28.268 Test: blockdev write read 8 blocks ...passed 00:21:28.268 Test: blockdev write read size > 128k ...passed 00:21:28.268 Test: blockdev write read invalid size ...passed 00:21:28.268 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:28.268 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:28.268 Test: blockdev write read max offset ...passed 00:21:28.530 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:28.530 Test: blockdev writev readv 8 blocks ...passed 00:21:28.530 Test: blockdev writev readv 30 x 1block ...passed 00:21:28.530 Test: blockdev writev readv block ...passed 00:21:28.530 Test: blockdev writev readv size > 128k ...passed 00:21:28.530 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:28.530 Test: blockdev comparev and writev ...[2024-11-26 07:30:56.552853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.530 [2024-11-26 07:30:56.552902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:28.530 [2024-11-26 07:30:56.552919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.530 [2024-11-26 07:30:56.552938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:28.530 [2024-11-26 07:30:56.553277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.530 [2024-11-26 07:30:56.553290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:28.530 [2024-11-26 07:30:56.553303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.530 [2024-11-26 07:30:56.553312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:28.530 [2024-11-26 07:30:56.553605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.530 [2024-11-26 07:30:56.553617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:28.530 [2024-11-26 07:30:56.553631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.530 [2024-11-26 07:30:56.553639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:28.530 [2024-11-26 07:30:56.553927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.530 [2024-11-26 07:30:56.553938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:28.530 [2024-11-26 07:30:56.553954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:28.530 [2024-11-26 07:30:56.553963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:28.530 passed 00:21:28.790 Test: blockdev nvme passthru rw ...passed 00:21:28.790 Test: blockdev nvme passthru vendor specific ...[2024-11-26 07:30:56.638592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:28.790 [2024-11-26 07:30:56.638607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:28.790 [2024-11-26 07:30:56.638826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:28.790 [2024-11-26 07:30:56.638837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:28.791 [2024-11-26 07:30:56.638951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:28.791 [2024-11-26 07:30:56.638961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:28.791 [2024-11-26 07:30:56.639085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:28.791 [2024-11-26 07:30:56.639095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:28.791 passed 00:21:28.791 Test: blockdev nvme admin passthru ...passed 00:21:28.791 Test: blockdev copy ...passed 00:21:28.791 00:21:28.791 Run Summary: Type Total Ran Passed Failed Inactive 00:21:28.791 suites 1 1 n/a 0 0 00:21:28.791 tests 23 23 23 0 0 00:21:28.791 asserts 152 152 152 0 n/a 00:21:28.791 00:21:28.791 Elapsed time = 1.126 seconds 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:29.052 07:30:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:29.052 rmmod nvme_tcp 00:21:29.052 rmmod nvme_fabrics 00:21:29.052 rmmod nvme_keyring 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1465245 ']' 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1465245 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1465245 ']' 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1465245 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1465245 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1465245' 00:21:29.052 killing process with pid 1465245 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1465245 00:21:29.052 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1465245 00:21:29.313 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.314 07:30:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:31.872 00:21:31.872 real 0m12.405s 00:21:31.872 user 0m13.433s 00:21:31.872 sys 0m6.684s 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.872 ************************************ 00:21:31.872 END TEST nvmf_bdevio_no_huge 00:21:31.872 ************************************ 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:31.872 ************************************ 00:21:31.872 START TEST nvmf_tls 00:21:31.872 ************************************ 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:31.872 * Looking for test storage... 00:21:31.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:31.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.872 --rc genhtml_branch_coverage=1 00:21:31.872 --rc genhtml_function_coverage=1 00:21:31.872 --rc genhtml_legend=1 00:21:31.872 --rc geninfo_all_blocks=1 00:21:31.872 --rc geninfo_unexecuted_blocks=1 00:21:31.872 00:21:31.872 ' 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:31.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.872 --rc genhtml_branch_coverage=1 00:21:31.872 --rc genhtml_function_coverage=1 00:21:31.872 --rc genhtml_legend=1 00:21:31.872 --rc geninfo_all_blocks=1 00:21:31.872 --rc geninfo_unexecuted_blocks=1 00:21:31.872 00:21:31.872 ' 00:21:31.872 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:31.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.872 --rc genhtml_branch_coverage=1 00:21:31.872 --rc genhtml_function_coverage=1 00:21:31.873 --rc genhtml_legend=1 00:21:31.873 --rc geninfo_all_blocks=1 00:21:31.873 --rc geninfo_unexecuted_blocks=1 00:21:31.873 00:21:31.873 ' 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:31.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.873 --rc genhtml_branch_coverage=1 00:21:31.873 --rc genhtml_function_coverage=1 00:21:31.873 --rc genhtml_legend=1 00:21:31.873 --rc geninfo_all_blocks=1 00:21:31.873 --rc geninfo_unexecuted_blocks=1 00:21:31.873 00:21:31.873 ' 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
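That block is scripts/common.sh's version gate running again for nvmf_tls: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them component by component, so an lcov older than 2 gets the legacy --rc coverage options exported just below. A stripped-down sketch of the comparison traced above (the real cmp_versions also handles the other operators):

    # Sketch of the lt/cmp_versions logic: succeed iff $1 sorts
    # strictly before $2, comparing numeric components left to right.
    lt() {
        local IFS=.-: v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo 'lcov < 2: keep --rc lcov_branch_coverage=1 options'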
00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:21:31.873 07:30:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
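At this point gather_supported_nvmf_pci_devs builds per-family arrays of PCI vendor:device IDs (Intel E810/X722 and several Mellanox ConnectX parts) and, in the records that follow, walks the matching PCI functions to find the kernel net devices bound to them (0000:4b:00.0/0000:4b:00.1 become cvl_0_0/cvl_0_1). A hedged sketch of that sysfs walk, limited to the two E810 IDs this run matched; it follows the standard sysfs layout rather than SPDK's exact pci_bus_cache logic:

# Report netdevs for Intel E810 functions (vendor 0x8086, device 0x159b/0x1592).
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")
    device=$(<"$dev/device")
    if [[ $vendor == 0x8086 && ($device == 0x159b || $device == 0x1592) ]]; then
        for net in "$dev"/net/*; do
            # Each entry under <pci>/net/ is a kernel interface name, e.g. cvl_0_0.
            [[ -e $net ]] && echo "Found ${dev##*/} ($vendor - $device): ${net##*/}"
        done
    fi
done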
00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:40.020 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:40.020 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.020 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:40.021 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:40.021 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.021 07:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:21:40.021 00:21:40.021 --- 10.0.0.2 ping statistics --- 00:21:40.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.021 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:21:40.021 00:21:40.021 --- 10.0.0.1 ping statistics --- 00:21:40.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.021 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1469938 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1469938 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1469938 ']' 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.021 07:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.021 [2024-11-26 07:31:07.354833] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:21:40.021 [2024-11-26 07:31:07.354901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.021 [2024-11-26 07:31:07.439136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.021 [2024-11-26 07:31:07.490628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.021 [2024-11-26 07:31:07.490676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.021 [2024-11-26 07:31:07.490687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.022 [2024-11-26 07:31:07.490698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.022 [2024-11-26 07:31:07.490707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.022 [2024-11-26 07:31:07.491464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.283 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.283 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:40.283 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.283 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.283 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.283 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.283 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:40.283 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:40.545 true 00:21:40.545 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:40.545 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:40.545 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:40.545 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:40.545 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:40.806 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:40.806 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:41.068 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:41.068 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:41.068 07:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:41.068 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:41.068 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:41.329 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:41.330 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:41.330 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:41.330 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:41.592 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:41.592 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:41.592 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:41.592 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:41.592 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:41.853 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:41.853 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:41.853 07:31:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:42.114 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:42.114 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.hUgClmjeL0 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.332M6t1Dom 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hUgClmjeL0 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.332M6t1Dom 00:21:42.377 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:42.638 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:42.899 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.hUgClmjeL0 00:21:42.899 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hUgClmjeL0 00:21:42.899 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:42.899 [2024-11-26 07:31:10.953866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.899 07:31:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:43.160 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:43.420 [2024-11-26 07:31:11.274638] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.420 [2024-11-26 07:31:11.274853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.420 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:43.420 malloc0 00:21:43.420 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:43.681 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hUgClmjeL0 00:21:43.942 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:43.942 07:31:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hUgClmjeL0 00:21:56.175 Initializing NVMe Controllers 00:21:56.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:56.175 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:56.175 Initialization complete. Launching workers. 00:21:56.175 ======================================================== 00:21:56.175 Latency(us) 00:21:56.175 Device Information : IOPS MiB/s Average min max 00:21:56.175 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18833.48 73.57 3398.43 1049.24 4536.48 00:21:56.175 ======================================================== 00:21:56.175 Total : 18833.48 73.57 3398.43 1049.24 4536.48 00:21:56.175 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUgClmjeL0 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUgClmjeL0 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1472950 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1472950 /var/tmp/bdevperf.sock 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1472950 ']' 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:56.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.175 [2024-11-26 07:31:22.128657] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:21:56.175 [2024-11-26 07:31:22.128714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472950 ] 00:21:56.175 [2024-11-26 07:31:22.214710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.175 [2024-11-26 07:31:22.250000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:56.175 07:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUgClmjeL0 00:21:56.175 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:56.175 [2024-11-26 07:31:23.249501] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.175 TLSTESTn1 00:21:56.175 07:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:56.175 Running I/O for 10 seconds... 
00:21:57.375 4842.00 IOPS, 18.91 MiB/s [2024-11-26T06:31:26.857Z] 4357.00 IOPS, 17.02 MiB/s [2024-11-26T06:31:27.797Z] 4199.00 IOPS, 16.40 MiB/s [2024-11-26T06:31:28.743Z] 4436.25 IOPS, 17.33 MiB/s [2024-11-26T06:31:29.683Z] 4796.60 IOPS, 18.74 MiB/s [2024-11-26T06:31:30.651Z] 4944.67 IOPS, 19.32 MiB/s [2024-11-26T06:31:31.662Z] 5073.29 IOPS, 19.82 MiB/s [2024-11-26T06:31:32.605Z] 5151.38 IOPS, 20.12 MiB/s [2024-11-26T06:31:33.546Z] 5258.33 IOPS, 20.54 MiB/s [2024-11-26T06:31:33.546Z] 5226.00 IOPS, 20.41 MiB/s 00:22:05.449 Latency(us) 00:22:05.449 [2024-11-26T06:31:33.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.449 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:05.449 Verification LBA range: start 0x0 length 0x2000 00:22:05.449 TLSTESTn1 : 10.01 5231.81 20.44 0.00 0.00 24430.12 5133.65 39103.15 00:22:05.449 [2024-11-26T06:31:33.547Z] =================================================================================================================== 00:22:05.449 [2024-11-26T06:31:33.547Z] Total : 5231.81 20.44 0.00 0.00 24430.12 5133.65 39103.15 00:22:05.449 { 00:22:05.449 "results": [ 00:22:05.449 { 00:22:05.449 "job": "TLSTESTn1", 00:22:05.449 "core_mask": "0x4", 00:22:05.449 "workload": "verify", 00:22:05.449 "status": "finished", 00:22:05.449 "verify_range": { 00:22:05.449 "start": 0, 00:22:05.449 "length": 8192 00:22:05.449 }, 00:22:05.449 "queue_depth": 128, 00:22:05.449 "io_size": 4096, 00:22:05.449 "runtime": 10.013168, 00:22:05.449 "iops": 5231.81075160229, 00:22:05.449 "mibps": 20.436760748446446, 00:22:05.449 "io_failed": 0, 00:22:05.449 "io_timeout": 0, 00:22:05.449 "avg_latency_us": 24430.123540573044, 00:22:05.449 "min_latency_us": 5133.653333333334, 00:22:05.449 "max_latency_us": 39103.14666666667 00:22:05.449 } 00:22:05.449 ], 00:22:05.449 "core_count": 1 00:22:05.449 } 00:22:05.449 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.449 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1472950 00:22:05.449 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1472950 ']' 00:22:05.449 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1472950 00:22:05.449 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:05.449 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.449 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1472950 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1472950' 00:22:05.710 killing process with pid 1472950 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1472950 00:22:05.710 Received shutdown signal, test time was about 10.000000 seconds 00:22:05.710 00:22:05.710 Latency(us) 00:22:05.710 [2024-11-26T06:31:33.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.710 [2024-11-26T06:31:33.808Z] 
=================================================================================================================== 00:22:05.710 [2024-11-26T06:31:33.808Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1472950 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.332M6t1Dom 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.332M6t1Dom 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.332M6t1Dom 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.332M6t1Dom 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1475097 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1475097 /var/tmp/bdevperf.sock 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1475097 ']' 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
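The first run_bdevperf case above passed with the same key (/tmp/tmp.hUgClmjeL0) registered on both sides; the NOT run_bdevperf case starting here loads the second key (/tmp/tmp.332M6t1Dom) on the initiator while the target subsystem still trusts only the first. Condensed from the RPCs in this trace (rpc.py is scripts/rpc.py), the failing sequence amounts to:

# Initiator side: register the mismatched PSK, then attempt a TLS attach.
# The handshake fails, so the RPC is expected to return an Input/output error.
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.332M6t1Dom
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0 \
    || echo "attach failed as expected"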
00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.710 07:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.710 [2024-11-26 07:31:33.715402] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:22:05.710 [2024-11-26 07:31:33.715460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475097 ] 00:22:05.710 [2024-11-26 07:31:33.799588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.970 [2024-11-26 07:31:33.828587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.541 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.541 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:06.541 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.332M6t1Dom 00:22:06.801 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:06.801 [2024-11-26 07:31:34.823366] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:06.801 [2024-11-26 07:31:34.830200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:06.801 [2024-11-26 07:31:34.830558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579bd0 (107): Transport endpoint is not connected 00:22:06.801 [2024-11-26 07:31:34.831553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x579bd0 (9): Bad file descriptor 00:22:06.801 [2024-11-26 07:31:34.832555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:06.801 [2024-11-26 07:31:34.832562] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:06.801 [2024-11-26 07:31:34.832569] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:06.801 [2024-11-26 07:31:34.832576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:22:06.801 request: 00:22:06.801 { 00:22:06.801 "name": "TLSTEST", 00:22:06.801 "trtype": "tcp", 00:22:06.801 "traddr": "10.0.0.2", 00:22:06.801 "adrfam": "ipv4", 00:22:06.801 "trsvcid": "4420", 00:22:06.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.801 "prchk_reftag": false, 00:22:06.801 "prchk_guard": false, 00:22:06.801 "hdgst": false, 00:22:06.801 "ddgst": false, 00:22:06.802 "psk": "key0", 00:22:06.802 "allow_unrecognized_csi": false, 00:22:06.802 "method": "bdev_nvme_attach_controller", 00:22:06.802 "req_id": 1 00:22:06.802 } 00:22:06.802 Got JSON-RPC error response 00:22:06.802 response: 00:22:06.802 { 00:22:06.802 "code": -5, 00:22:06.802 "message": "Input/output error" 00:22:06.802 } 00:22:06.802 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1475097 00:22:06.802 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1475097 ']' 00:22:06.802 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1475097 00:22:06.802 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:06.802 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.802 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1475097 00:22:07.062 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:07.062 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:07.062 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1475097' 00:22:07.062 killing process with pid 1475097 00:22:07.062 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1475097 00:22:07.062 Received shutdown signal, test time was about 10.000000 seconds 00:22:07.062 00:22:07.062 Latency(us) 00:22:07.062 [2024-11-26T06:31:35.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.062 [2024-11-26T06:31:35.160Z] =================================================================================================================== 00:22:07.062 [2024-11-26T06:31:35.160Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:07.062 07:31:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1475097 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hUgClmjeL0 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.hUgClmjeL0 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hUgClmjeL0 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:07.062 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUgClmjeL0 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1475326 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1475326 /var/tmp/bdevperf.sock 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1475326 ']' 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.063 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.063 [2024-11-26 07:31:35.076685] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:22:07.063 [2024-11-26 07:31:35.076744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475326 ] 00:22:07.324 [2024-11-26 07:31:35.161368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.324 [2024-11-26 07:31:35.190037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.895 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.895 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:07.895 07:31:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUgClmjeL0 00:22:08.156 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:22:08.156 [2024-11-26 07:31:36.196541] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.156 [2024-11-26 07:31:36.203447] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:08.156 [2024-11-26 07:31:36.203471] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:08.156 [2024-11-26 07:31:36.203490] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:08.156 [2024-11-26 07:31:36.203641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecdbd0 (107): Transport endpoint is not connected 00:22:08.156 [2024-11-26 07:31:36.204631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecdbd0 (9): Bad file descriptor 00:22:08.156 [2024-11-26 07:31:36.205633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:08.156 [2024-11-26 07:31:36.205640] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:08.156 [2024-11-26 07:31:36.205646] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:08.156 [2024-11-26 07:31:36.205654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:22:08.156 request: 00:22:08.156 { 00:22:08.156 "name": "TLSTEST", 00:22:08.156 "trtype": "tcp", 00:22:08.156 "traddr": "10.0.0.2", 00:22:08.156 "adrfam": "ipv4", 00:22:08.156 "trsvcid": "4420", 00:22:08.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.156 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:08.156 "prchk_reftag": false, 00:22:08.156 "prchk_guard": false, 00:22:08.156 "hdgst": false, 00:22:08.156 "ddgst": false, 00:22:08.156 "psk": "key0", 00:22:08.156 "allow_unrecognized_csi": false, 00:22:08.156 "method": "bdev_nvme_attach_controller", 00:22:08.156 "req_id": 1 00:22:08.156 } 00:22:08.156 Got JSON-RPC error response 00:22:08.156 response: 00:22:08.156 { 00:22:08.156 "code": -5, 00:22:08.156 "message": "Input/output error" 00:22:08.156 } 00:22:08.156 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1475326 00:22:08.156 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1475326 ']' 00:22:08.156 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1475326 00:22:08.156 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:08.156 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.156 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1475326 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1475326' 00:22:08.418 killing process with pid 1475326 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1475326 00:22:08.418 Received shutdown signal, test time was about 10.000000 seconds 00:22:08.418 00:22:08.418 Latency(us) 00:22:08.418 [2024-11-26T06:31:36.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.418 [2024-11-26T06:31:36.516Z] =================================================================================================================== 00:22:08.418 [2024-11-26T06:31:36.516Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1475326 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUgClmjeL0 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.hUgClmjeL0 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.418 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUgClmjeL0 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUgClmjeL0 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1475668 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1475668 /var/tmp/bdevperf.sock 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1475668 ']' 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.419 07:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.419 [2024-11-26 07:31:36.454569] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:22:08.419 [2024-11-26 07:31:36.454626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475668 ] 00:22:08.679 [2024-11-26 07:31:36.539365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.679 [2024-11-26 07:31:36.567328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.250 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.250 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:09.250 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUgClmjeL0 00:22:09.511 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:09.511 [2024-11-26 07:31:37.597883] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:09.772 [2024-11-26 07:31:37.606852] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:09.772 [2024-11-26 07:31:37.606871] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:09.772 [2024-11-26 07:31:37.606888] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:09.772 [2024-11-26 07:31:37.607065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1246bd0 (107): Transport endpoint is not connected 00:22:09.772 [2024-11-26 07:31:37.608060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1246bd0 (9): Bad file descriptor 00:22:09.772 [2024-11-26 07:31:37.609063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:22:09.772 [2024-11-26 07:31:37.609073] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:09.772 [2024-11-26 07:31:37.609080] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:22:09.772 [2024-11-26 07:31:37.609088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
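Same failure shape as the previous case, with the mismatch moved to the subsystem side: key0 is valid, but the target holds no PSK for the host1 <-> cnode2 pairing. The NOT/valid_exec_arg trace wrapped around these runs is autotest's negative-test idiom; a simplified sketch of the helper (an assumption about its essence -- the real NOT in autotest_common.sh carries extra bookkeeping around the same core):

NOT() { "$@" && return 1 || return 0; }
# Expected to fail: no PSK registered for host1 <-> cnode2 on the target.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUgClmjeL0

The attach error is echoed as the JSON-RPC response below, and the nonzero exit is exactly what lets the test pass.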
00:22:09.772 request: 00:22:09.772 { 00:22:09.772 "name": "TLSTEST", 00:22:09.772 "trtype": "tcp", 00:22:09.772 "traddr": "10.0.0.2", 00:22:09.772 "adrfam": "ipv4", 00:22:09.772 "trsvcid": "4420", 00:22:09.772 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:09.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.772 "prchk_reftag": false, 00:22:09.772 "prchk_guard": false, 00:22:09.772 "hdgst": false, 00:22:09.773 "ddgst": false, 00:22:09.773 "psk": "key0", 00:22:09.773 "allow_unrecognized_csi": false, 00:22:09.773 "method": "bdev_nvme_attach_controller", 00:22:09.773 "req_id": 1 00:22:09.773 } 00:22:09.773 Got JSON-RPC error response 00:22:09.773 response: 00:22:09.773 { 00:22:09.773 "code": -5, 00:22:09.773 "message": "Input/output error" 00:22:09.773 } 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1475668 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1475668 ']' 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1475668 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1475668 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1475668' 00:22:09.773 killing process with pid 1475668 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1475668 00:22:09.773 Received shutdown signal, test time was about 10.000000 seconds 00:22:09.773 00:22:09.773 Latency(us) 00:22:09.773 [2024-11-26T06:31:37.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.773 [2024-11-26T06:31:37.871Z] =================================================================================================================== 00:22:09.773 [2024-11-26T06:31:37.871Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1475668 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:09.773 
07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1476003 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1476003 /var/tmp/bdevperf.sock 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1476003 ']' 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.773 07:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.773 [2024-11-26 07:31:37.848187] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:22:09.773 [2024-11-26 07:31:37.848244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476003 ] 00:22:10.034 [2024-11-26 07:31:37.931903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.034 [2024-11-26 07:31:37.960177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.606 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.606 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:10.606 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:22:10.867 [2024-11-26 07:31:38.794277] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:22:10.867 [2024-11-26 07:31:38.794306] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:10.867 request: 00:22:10.867 { 00:22:10.867 "name": "key0", 00:22:10.867 "path": "", 00:22:10.867 "method": "keyring_file_add_key", 00:22:10.867 "req_id": 1 00:22:10.867 } 00:22:10.867 Got JSON-RPC error response 00:22:10.867 response: 00:22:10.867 { 00:22:10.867 "code": -1, 00:22:10.867 "message": "Operation not permitted" 00:22:10.867 } 00:22:10.867 07:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:11.129 [2024-11-26 07:31:38.978826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.129 [2024-11-26 07:31:38.978847] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:11.129 request: 00:22:11.129 { 00:22:11.129 "name": "TLSTEST", 00:22:11.129 "trtype": "tcp", 00:22:11.129 "traddr": "10.0.0.2", 00:22:11.129 "adrfam": "ipv4", 00:22:11.129 "trsvcid": "4420", 00:22:11.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:11.129 "prchk_reftag": false, 00:22:11.129 "prchk_guard": false, 00:22:11.129 "hdgst": false, 00:22:11.129 "ddgst": false, 00:22:11.129 "psk": "key0", 00:22:11.129 "allow_unrecognized_csi": false, 00:22:11.129 "method": "bdev_nvme_attach_controller", 00:22:11.129 "req_id": 1 00:22:11.129 } 00:22:11.129 Got JSON-RPC error response 00:22:11.129 response: 00:22:11.129 { 00:22:11.129 "code": -126, 00:22:11.129 "message": "Required key not available" 00:22:11.129 } 00:22:11.129 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1476003 00:22:11.129 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1476003 ']' 00:22:11.129 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1476003 00:22:11.129 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:11.129 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.129 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1476003 00:22:11.129 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:11.129 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:11.129 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1476003' 00:22:11.129 killing process with pid 1476003 00:22:11.129 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1476003 00:22:11.129 Received shutdown signal, test time was about 10.000000 seconds 00:22:11.129 00:22:11.129 Latency(us) 00:22:11.129 [2024-11-26T06:31:39.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.129 [2024-11-26T06:31:39.227Z] =================================================================================================================== 00:22:11.129 [2024-11-26T06:31:39.227Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1476003 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1469938 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1469938 ']' 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1469938 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.130 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1469938 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1469938' 00:22:11.392 killing process with pid 1469938 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1469938 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1469938 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:11.392 07:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.U5eqNhvrHB 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.U5eqNhvrHB 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1476362 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1476362 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1476362 ']' 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.392 07:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.392 [2024-11-26 07:31:39.455093] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:22:11.392 [2024-11-26 07:31:39.455152] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.654 [2024-11-26 07:31:39.545981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.654 [2024-11-26 07:31:39.580875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.654 [2024-11-26 07:31:39.580919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
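The key_long string assembled above is the NVMe TLS "configured PSK" interchange format: the raw key bytes, their CRC32 appended little-endian, base64-encoded, then wrapped in a version prefix and a two-digit hash identifier (01 = SHA-256, 02 = SHA-384). A back-of-envelope reimplementation of the format_key step, written only to reproduce the exact string logged above rather than as SPDK's verbatim helper (it shells out to python just as the traced nvmf/common.sh code does):

format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte integrity trailer
print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The chmod 0600 on the mktemp'd key file is not cosmetic; the keyring enforces it, as the permission tests later in this run demonstrate.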
00:22:11.654 [2024-11-26 07:31:39.580925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.654 [2024-11-26 07:31:39.580929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.654 [2024-11-26 07:31:39.580934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.654 [2024-11-26 07:31:39.581483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.226 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.226 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:12.226 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.226 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.226 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.226 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.226 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.U5eqNhvrHB 00:22:12.226 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.U5eqNhvrHB 00:22:12.226 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:12.487 [2024-11-26 07:31:40.464482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.487 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:12.747 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:12.747 [2024-11-26 07:31:40.833386] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:12.747 [2024-11-26 07:31:40.833587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.008 07:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:13.008 malloc0 00:22:13.008 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:13.269 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U5eqNhvrHB 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.U5eqNhvrHB 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1476727 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1476727 /var/tmp/bdevperf.sock 00:22:13.530 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1476727 ']' 00:22:13.531 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.531 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.531 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.531 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.531 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.531 07:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.792 [2024-11-26 07:31:41.641356] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:22:13.792 [2024-11-26 07:31:41.641410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476727 ] 00:22:13.792 [2024-11-26 07:31:41.725039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.792 [2024-11-26 07:31:41.754024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.362 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.362 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:14.362 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB 00:22:14.622 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:14.884 [2024-11-26 07:31:42.760567] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.884 TLSTESTn1 00:22:14.884 07:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:14.884 Running I/O for 10 seconds... 00:22:17.208 6436.00 IOPS, 25.14 MiB/s [2024-11-26T06:31:46.245Z] 6491.50 IOPS, 25.36 MiB/s [2024-11-26T06:31:47.186Z] 6491.67 IOPS, 25.36 MiB/s [2024-11-26T06:31:48.128Z] 6467.00 IOPS, 25.26 MiB/s [2024-11-26T06:31:49.071Z] 6398.80 IOPS, 25.00 MiB/s [2024-11-26T06:31:50.012Z] 6415.67 IOPS, 25.06 MiB/s [2024-11-26T06:31:51.394Z] 6432.57 IOPS, 25.13 MiB/s [2024-11-26T06:31:52.336Z] 6441.25 IOPS, 25.16 MiB/s [2024-11-26T06:31:53.278Z] 6438.44 IOPS, 25.15 MiB/s [2024-11-26T06:31:53.278Z] 6454.10 IOPS, 25.21 MiB/s 00:22:25.180 Latency(us) 00:22:25.180 [2024-11-26T06:31:53.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.180 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:25.180 Verification LBA range: start 0x0 length 0x2000 00:22:25.181 TLSTESTn1 : 10.01 6458.07 25.23 0.00 0.00 19790.24 5242.88 23920.64 00:22:25.181 [2024-11-26T06:31:53.279Z] =================================================================================================================== 00:22:25.181 [2024-11-26T06:31:53.279Z] Total : 6458.07 25.23 0.00 0.00 19790.24 5242.88 23920.64 00:22:25.181 { 00:22:25.181 "results": [ 00:22:25.181 { 00:22:25.181 "job": "TLSTESTn1", 00:22:25.181 "core_mask": "0x4", 00:22:25.181 "workload": "verify", 00:22:25.181 "status": "finished", 00:22:25.181 "verify_range": { 00:22:25.181 "start": 0, 00:22:25.181 "length": 8192 00:22:25.181 }, 00:22:25.181 "queue_depth": 128, 00:22:25.181 "io_size": 4096, 00:22:25.181 "runtime": 10.013515, 00:22:25.181 "iops": 6458.071915805788, 00:22:25.181 "mibps": 25.22684342111636, 00:22:25.181 "io_failed": 0, 00:22:25.181 "io_timeout": 0, 00:22:25.181 "avg_latency_us": 19790.23684295169, 00:22:25.181 "min_latency_us": 5242.88, 00:22:25.181 "max_latency_us": 23920.64 00:22:25.181 } 00:22:25.181 ], 00:22:25.181 "core_count": 1 
00:22:25.181 } 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1476727 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1476727 ']' 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1476727 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1476727 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1476727' 00:22:25.181 killing process with pid 1476727 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1476727 00:22:25.181 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.181 00:22:25.181 Latency(us) 00:22:25.181 [2024-11-26T06:31:53.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.181 [2024-11-26T06:31:53.279Z] =================================================================================================================== 00:22:25.181 [2024-11-26T06:31:53.279Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1476727 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.U5eqNhvrHB 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U5eqNhvrHB 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U5eqNhvrHB 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U5eqNhvrHB 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:25.181 07:31:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.U5eqNhvrHB 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1479067 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1479067 /var/tmp/bdevperf.sock 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1479067 ']' 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.181 07:31:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.181 [2024-11-26 07:31:53.240990] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:22:25.181 [2024-11-26 07:31:53.241047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1479067 ] 00:22:25.441 [2024-11-26 07:31:53.325760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.441 [2024-11-26 07:31:53.353774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.013 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.013 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:26.013 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB 00:22:26.274 [2024-11-26 07:31:54.187847] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.U5eqNhvrHB': 0100666 00:22:26.274 [2024-11-26 07:31:54.187875] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:26.274 request: 00:22:26.274 { 00:22:26.274 "name": "key0", 00:22:26.274 "path": "/tmp/tmp.U5eqNhvrHB", 00:22:26.274 "method": "keyring_file_add_key", 00:22:26.274 "req_id": 1 00:22:26.274 } 00:22:26.274 Got JSON-RPC error response 00:22:26.274 response: 00:22:26.274 { 00:22:26.274 "code": -1, 00:22:26.274 "message": "Operation not permitted" 00:22:26.274 } 00:22:26.274 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:26.274 [2024-11-26 07:31:54.364363] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.274 [2024-11-26 07:31:54.364386] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:26.535 request: 00:22:26.535 { 00:22:26.535 "name": "TLSTEST", 00:22:26.535 "trtype": "tcp", 00:22:26.535 "traddr": "10.0.0.2", 00:22:26.535 "adrfam": "ipv4", 00:22:26.535 "trsvcid": "4420", 00:22:26.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.535 "prchk_reftag": false, 00:22:26.535 "prchk_guard": false, 00:22:26.535 "hdgst": false, 00:22:26.535 "ddgst": false, 00:22:26.535 "psk": "key0", 00:22:26.535 "allow_unrecognized_csi": false, 00:22:26.535 "method": "bdev_nvme_attach_controller", 00:22:26.535 "req_id": 1 00:22:26.535 } 00:22:26.535 Got JSON-RPC error response 00:22:26.535 response: 00:22:26.535 { 00:22:26.535 "code": -126, 00:22:26.535 "message": "Required key not available" 00:22:26.535 } 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1479067 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1479067 ']' 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1479067 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479067 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479067' 00:22:26.535 killing process with pid 1479067 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1479067 00:22:26.535 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.535 00:22:26.535 Latency(us) 00:22:26.535 [2024-11-26T06:31:54.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.535 [2024-11-26T06:31:54.633Z] =================================================================================================================== 00:22:26.535 [2024-11-26T06:31:54.633Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1479067 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1476362 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1476362 ']' 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1476362 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1476362 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1476362' 00:22:26.535 killing process with pid 1476362 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1476362 00:22:26.535 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1476362 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1479336 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1479336 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1479336 ']' 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.796 07:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.796 [2024-11-26 07:31:54.792289] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:22:26.796 [2024-11-26 07:31:54.792344] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.796 [2024-11-26 07:31:54.884258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.057 [2024-11-26 07:31:54.917308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.057 [2024-11-26 07:31:54.917341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.057 [2024-11-26 07:31:54.917347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.057 [2024-11-26 07:31:54.917352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.057 [2024-11-26 07:31:54.917356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
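This nvmfappstart instance exists to exercise the keyring's permission check: the key file was re-chmodded to 0666 a few steps back, and keyring_file_add_key refuses key files whose mode grants group/other access, as the "Invalid permissions ... 0100666" errors in this run show (the exact set of accepted modes is inferred; every successful registration in this log uses 0600). In sketch form, with rpc.py standing in for the full scripts/rpc.py path used above:

chmod 0666 /tmp/tmp.U5eqNhvrHB
rpc.py keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB   # rejected: mode 0100666
chmod 0600 /tmp/tmp.U5eqNhvrHB
rpc.py keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB   # accepted

Both the bdevperf side (keyring_file_add_key directly) and the target side (setup_nvmf_tgt, and through it nvmf_subsystem_add_host) trip over the same check until the 0600 restore.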
00:22:27.057 [2024-11-26 07:31:54.917857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.U5eqNhvrHB 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.U5eqNhvrHB 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.U5eqNhvrHB 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.U5eqNhvrHB 00:22:27.628 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:27.889 [2024-11-26 07:31:55.791769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.889 07:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:28.150 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:28.150 [2024-11-26 07:31:56.148650] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.150 [2024-11-26 07:31:56.148857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.150 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:28.410 malloc0 00:22:28.410 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:28.672 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB 00:22:28.672 [2024-11-26 
07:31:56.703817] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.U5eqNhvrHB': 0100666 00:22:28.672 [2024-11-26 07:31:56.703837] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:28.672 request: 00:22:28.672 { 00:22:28.672 "name": "key0", 00:22:28.672 "path": "/tmp/tmp.U5eqNhvrHB", 00:22:28.672 "method": "keyring_file_add_key", 00:22:28.672 "req_id": 1 00:22:28.672 } 00:22:28.672 Got JSON-RPC error response 00:22:28.672 response: 00:22:28.672 { 00:22:28.672 "code": -1, 00:22:28.672 "message": "Operation not permitted" 00:22:28.672 } 00:22:28.672 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:28.933 [2024-11-26 07:31:56.884297] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:28.933 [2024-11-26 07:31:56.884324] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:28.933 request: 00:22:28.933 { 00:22:28.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.933 "host": "nqn.2016-06.io.spdk:host1", 00:22:28.933 "psk": "key0", 00:22:28.933 "method": "nvmf_subsystem_add_host", 00:22:28.933 "req_id": 1 00:22:28.933 } 00:22:28.933 Got JSON-RPC error response 00:22:28.933 response: 00:22:28.933 { 00:22:28.933 "code": -32603, 00:22:28.933 "message": "Internal error" 00:22:28.933 } 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1479336 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1479336 ']' 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1479336 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479336 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479336' 00:22:28.933 killing process with pid 1479336 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1479336 00:22:28.933 07:31:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1479336 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.U5eqNhvrHB 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:29.194 07:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1479792 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1479792 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1479792 ']' 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.194 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.194 [2024-11-26 07:31:57.157151] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:22:29.194 [2024-11-26 07:31:57.157215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.194 [2024-11-26 07:31:57.245189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.194 [2024-11-26 07:31:57.276053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.194 [2024-11-26 07:31:57.276085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.194 [2024-11-26 07:31:57.276092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.194 [2024-11-26 07:31:57.276096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.194 [2024-11-26 07:31:57.276101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
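With the key back at mode 0600, the run brings up a fresh target and repeats the setup_nvmf_tgt helper, which succeeds this time. Pieced together from the RPC calls traced below, the helper reduces to the following sequence (arguments copied from this log; a sketch of the flow, not the verbatim target/tls.sh):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0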
00:22:29.194 [2024-11-26 07:31:57.276563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.137 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.137 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:30.137 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.137 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.137 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.137 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.137 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.U5eqNhvrHB 00:22:30.137 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.U5eqNhvrHB 00:22:30.137 07:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:30.137 [2024-11-26 07:31:58.149492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.137 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:30.397 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:30.658 [2024-11-26 07:31:58.518393] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:30.658 [2024-11-26 07:31:58.518597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.658 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:30.658 malloc0 00:22:30.658 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:30.919 07:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1480158 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1480158 /var/tmp/bdevperf.sock 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1480158 ']' 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.180 07:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.439 [2024-11-26 07:31:59.314334] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:22:31.439 [2024-11-26 07:31:59.314387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480158 ] 00:22:31.439 [2024-11-26 07:31:59.403689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.439 [2024-11-26 07:31:59.438516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.010 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.010 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:32.010 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB 00:22:32.270 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:32.531 [2024-11-26 07:32:00.430188] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.531 TLSTESTn1 00:22:32.531 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:32.792 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:32.792 "subsystems": [ 00:22:32.792 { 00:22:32.792 "subsystem": "keyring", 00:22:32.792 "config": [ 00:22:32.792 { 00:22:32.792 "method": "keyring_file_add_key", 00:22:32.792 "params": { 00:22:32.792 "name": "key0", 00:22:32.792 "path": "/tmp/tmp.U5eqNhvrHB" 00:22:32.792 } 00:22:32.792 } 00:22:32.792 ] 00:22:32.792 }, 00:22:32.792 { 00:22:32.792 "subsystem": "iobuf", 00:22:32.792 "config": [ 00:22:32.792 { 00:22:32.792 "method": "iobuf_set_options", 00:22:32.792 "params": { 00:22:32.792 "small_pool_count": 8192, 00:22:32.792 "large_pool_count": 1024, 00:22:32.792 "small_bufsize": 8192, 00:22:32.792 "large_bufsize": 135168, 00:22:32.792 "enable_numa": false 00:22:32.792 } 00:22:32.792 } 00:22:32.792 ] 00:22:32.792 }, 00:22:32.792 { 00:22:32.792 "subsystem": "sock", 00:22:32.792 "config": [ 00:22:32.792 { 00:22:32.792 "method": "sock_set_default_impl", 00:22:32.792 "params": { 00:22:32.792 "impl_name": "posix" 
00:22:32.792 } 00:22:32.792 }, 00:22:32.792 { 00:22:32.792 "method": "sock_impl_set_options", 00:22:32.792 "params": { 00:22:32.792 "impl_name": "ssl", 00:22:32.792 "recv_buf_size": 4096, 00:22:32.792 "send_buf_size": 4096, 00:22:32.792 "enable_recv_pipe": true, 00:22:32.792 "enable_quickack": false, 00:22:32.792 "enable_placement_id": 0, 00:22:32.792 "enable_zerocopy_send_server": true, 00:22:32.792 "enable_zerocopy_send_client": false, 00:22:32.792 "zerocopy_threshold": 0, 00:22:32.792 "tls_version": 0, 00:22:32.792 "enable_ktls": false 00:22:32.792 } 00:22:32.792 }, 00:22:32.792 { 00:22:32.792 "method": "sock_impl_set_options", 00:22:32.792 "params": { 00:22:32.792 "impl_name": "posix", 00:22:32.792 "recv_buf_size": 2097152, 00:22:32.792 "send_buf_size": 2097152, 00:22:32.792 "enable_recv_pipe": true, 00:22:32.792 "enable_quickack": false, 00:22:32.792 "enable_placement_id": 0, 00:22:32.792 "enable_zerocopy_send_server": true, 00:22:32.792 "enable_zerocopy_send_client": false, 00:22:32.792 "zerocopy_threshold": 0, 00:22:32.792 "tls_version": 0, 00:22:32.792 "enable_ktls": false 00:22:32.792 } 00:22:32.792 } 00:22:32.792 ] 00:22:32.792 }, 00:22:32.792 { 00:22:32.792 "subsystem": "vmd", 00:22:32.792 "config": [] 00:22:32.792 }, 00:22:32.792 { 00:22:32.792 "subsystem": "accel", 00:22:32.792 "config": [ 00:22:32.792 { 00:22:32.792 "method": "accel_set_options", 00:22:32.792 "params": { 00:22:32.792 "small_cache_size": 128, 00:22:32.792 "large_cache_size": 16, 00:22:32.792 "task_count": 2048, 00:22:32.792 "sequence_count": 2048, 00:22:32.792 "buf_count": 2048 00:22:32.792 } 00:22:32.792 } 00:22:32.792 ] 00:22:32.792 }, 00:22:32.792 { 00:22:32.793 "subsystem": "bdev", 00:22:32.793 "config": [ 00:22:32.793 { 00:22:32.793 "method": "bdev_set_options", 00:22:32.793 "params": { 00:22:32.793 "bdev_io_pool_size": 65535, 00:22:32.793 "bdev_io_cache_size": 256, 00:22:32.793 "bdev_auto_examine": true, 00:22:32.793 "iobuf_small_cache_size": 128, 00:22:32.793 "iobuf_large_cache_size": 16 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "bdev_raid_set_options", 00:22:32.793 "params": { 00:22:32.793 "process_window_size_kb": 1024, 00:22:32.793 "process_max_bandwidth_mb_sec": 0 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "bdev_iscsi_set_options", 00:22:32.793 "params": { 00:22:32.793 "timeout_sec": 30 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "bdev_nvme_set_options", 00:22:32.793 "params": { 00:22:32.793 "action_on_timeout": "none", 00:22:32.793 "timeout_us": 0, 00:22:32.793 "timeout_admin_us": 0, 00:22:32.793 "keep_alive_timeout_ms": 10000, 00:22:32.793 "arbitration_burst": 0, 00:22:32.793 "low_priority_weight": 0, 00:22:32.793 "medium_priority_weight": 0, 00:22:32.793 "high_priority_weight": 0, 00:22:32.793 "nvme_adminq_poll_period_us": 10000, 00:22:32.793 "nvme_ioq_poll_period_us": 0, 00:22:32.793 "io_queue_requests": 0, 00:22:32.793 "delay_cmd_submit": true, 00:22:32.793 "transport_retry_count": 4, 00:22:32.793 "bdev_retry_count": 3, 00:22:32.793 "transport_ack_timeout": 0, 00:22:32.793 "ctrlr_loss_timeout_sec": 0, 00:22:32.793 "reconnect_delay_sec": 0, 00:22:32.793 "fast_io_fail_timeout_sec": 0, 00:22:32.793 "disable_auto_failback": false, 00:22:32.793 "generate_uuids": false, 00:22:32.793 "transport_tos": 0, 00:22:32.793 "nvme_error_stat": false, 00:22:32.793 "rdma_srq_size": 0, 00:22:32.793 "io_path_stat": false, 00:22:32.793 "allow_accel_sequence": false, 00:22:32.793 "rdma_max_cq_size": 0, 00:22:32.793 
"rdma_cm_event_timeout_ms": 0, 00:22:32.793 "dhchap_digests": [ 00:22:32.793 "sha256", 00:22:32.793 "sha384", 00:22:32.793 "sha512" 00:22:32.793 ], 00:22:32.793 "dhchap_dhgroups": [ 00:22:32.793 "null", 00:22:32.793 "ffdhe2048", 00:22:32.793 "ffdhe3072", 00:22:32.793 "ffdhe4096", 00:22:32.793 "ffdhe6144", 00:22:32.793 "ffdhe8192" 00:22:32.793 ] 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "bdev_nvme_set_hotplug", 00:22:32.793 "params": { 00:22:32.793 "period_us": 100000, 00:22:32.793 "enable": false 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "bdev_malloc_create", 00:22:32.793 "params": { 00:22:32.793 "name": "malloc0", 00:22:32.793 "num_blocks": 8192, 00:22:32.793 "block_size": 4096, 00:22:32.793 "physical_block_size": 4096, 00:22:32.793 "uuid": "eb7dfd3c-725f-4f01-8de2-792fa509faf8", 00:22:32.793 "optimal_io_boundary": 0, 00:22:32.793 "md_size": 0, 00:22:32.793 "dif_type": 0, 00:22:32.793 "dif_is_head_of_md": false, 00:22:32.793 "dif_pi_format": 0 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "bdev_wait_for_examine" 00:22:32.793 } 00:22:32.793 ] 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "subsystem": "nbd", 00:22:32.793 "config": [] 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "subsystem": "scheduler", 00:22:32.793 "config": [ 00:22:32.793 { 00:22:32.793 "method": "framework_set_scheduler", 00:22:32.793 "params": { 00:22:32.793 "name": "static" 00:22:32.793 } 00:22:32.793 } 00:22:32.793 ] 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "subsystem": "nvmf", 00:22:32.793 "config": [ 00:22:32.793 { 00:22:32.793 "method": "nvmf_set_config", 00:22:32.793 "params": { 00:22:32.793 "discovery_filter": "match_any", 00:22:32.793 "admin_cmd_passthru": { 00:22:32.793 "identify_ctrlr": false 00:22:32.793 }, 00:22:32.793 "dhchap_digests": [ 00:22:32.793 "sha256", 00:22:32.793 "sha384", 00:22:32.793 "sha512" 00:22:32.793 ], 00:22:32.793 "dhchap_dhgroups": [ 00:22:32.793 "null", 00:22:32.793 "ffdhe2048", 00:22:32.793 "ffdhe3072", 00:22:32.793 "ffdhe4096", 00:22:32.793 "ffdhe6144", 00:22:32.793 "ffdhe8192" 00:22:32.793 ] 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "nvmf_set_max_subsystems", 00:22:32.793 "params": { 00:22:32.793 "max_subsystems": 1024 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "nvmf_set_crdt", 00:22:32.793 "params": { 00:22:32.793 "crdt1": 0, 00:22:32.793 "crdt2": 0, 00:22:32.793 "crdt3": 0 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "nvmf_create_transport", 00:22:32.793 "params": { 00:22:32.793 "trtype": "TCP", 00:22:32.793 "max_queue_depth": 128, 00:22:32.793 "max_io_qpairs_per_ctrlr": 127, 00:22:32.793 "in_capsule_data_size": 4096, 00:22:32.793 "max_io_size": 131072, 00:22:32.793 "io_unit_size": 131072, 00:22:32.793 "max_aq_depth": 128, 00:22:32.793 "num_shared_buffers": 511, 00:22:32.793 "buf_cache_size": 4294967295, 00:22:32.793 "dif_insert_or_strip": false, 00:22:32.793 "zcopy": false, 00:22:32.793 "c2h_success": false, 00:22:32.793 "sock_priority": 0, 00:22:32.793 "abort_timeout_sec": 1, 00:22:32.793 "ack_timeout": 0, 00:22:32.793 "data_wr_pool_size": 0 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "nvmf_create_subsystem", 00:22:32.793 "params": { 00:22:32.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.793 "allow_any_host": false, 00:22:32.793 "serial_number": "SPDK00000000000001", 00:22:32.793 "model_number": "SPDK bdev Controller", 00:22:32.793 "max_namespaces": 10, 00:22:32.793 "min_cntlid": 1, 00:22:32.793 
"max_cntlid": 65519, 00:22:32.793 "ana_reporting": false 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "nvmf_subsystem_add_host", 00:22:32.793 "params": { 00:22:32.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.793 "host": "nqn.2016-06.io.spdk:host1", 00:22:32.793 "psk": "key0" 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "nvmf_subsystem_add_ns", 00:22:32.793 "params": { 00:22:32.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.793 "namespace": { 00:22:32.793 "nsid": 1, 00:22:32.793 "bdev_name": "malloc0", 00:22:32.793 "nguid": "EB7DFD3C725F4F018DE2792FA509FAF8", 00:22:32.793 "uuid": "eb7dfd3c-725f-4f01-8de2-792fa509faf8", 00:22:32.793 "no_auto_visible": false 00:22:32.793 } 00:22:32.793 } 00:22:32.793 }, 00:22:32.793 { 00:22:32.793 "method": "nvmf_subsystem_add_listener", 00:22:32.794 "params": { 00:22:32.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.794 "listen_address": { 00:22:32.794 "trtype": "TCP", 00:22:32.794 "adrfam": "IPv4", 00:22:32.794 "traddr": "10.0.0.2", 00:22:32.794 "trsvcid": "4420" 00:22:32.794 }, 00:22:32.794 "secure_channel": true 00:22:32.794 } 00:22:32.794 } 00:22:32.794 ] 00:22:32.794 } 00:22:32.794 ] 00:22:32.794 }' 00:22:32.794 07:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:33.053 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:33.053 "subsystems": [ 00:22:33.053 { 00:22:33.053 "subsystem": "keyring", 00:22:33.053 "config": [ 00:22:33.053 { 00:22:33.053 "method": "keyring_file_add_key", 00:22:33.053 "params": { 00:22:33.053 "name": "key0", 00:22:33.053 "path": "/tmp/tmp.U5eqNhvrHB" 00:22:33.053 } 00:22:33.053 } 00:22:33.053 ] 00:22:33.053 }, 00:22:33.053 { 00:22:33.053 "subsystem": "iobuf", 00:22:33.053 "config": [ 00:22:33.053 { 00:22:33.053 "method": "iobuf_set_options", 00:22:33.053 "params": { 00:22:33.053 "small_pool_count": 8192, 00:22:33.053 "large_pool_count": 1024, 00:22:33.053 "small_bufsize": 8192, 00:22:33.054 "large_bufsize": 135168, 00:22:33.054 "enable_numa": false 00:22:33.054 } 00:22:33.054 } 00:22:33.054 ] 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "subsystem": "sock", 00:22:33.054 "config": [ 00:22:33.054 { 00:22:33.054 "method": "sock_set_default_impl", 00:22:33.054 "params": { 00:22:33.054 "impl_name": "posix" 00:22:33.054 } 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "method": "sock_impl_set_options", 00:22:33.054 "params": { 00:22:33.054 "impl_name": "ssl", 00:22:33.054 "recv_buf_size": 4096, 00:22:33.054 "send_buf_size": 4096, 00:22:33.054 "enable_recv_pipe": true, 00:22:33.054 "enable_quickack": false, 00:22:33.054 "enable_placement_id": 0, 00:22:33.054 "enable_zerocopy_send_server": true, 00:22:33.054 "enable_zerocopy_send_client": false, 00:22:33.054 "zerocopy_threshold": 0, 00:22:33.054 "tls_version": 0, 00:22:33.054 "enable_ktls": false 00:22:33.054 } 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "method": "sock_impl_set_options", 00:22:33.054 "params": { 00:22:33.054 "impl_name": "posix", 00:22:33.054 "recv_buf_size": 2097152, 00:22:33.054 "send_buf_size": 2097152, 00:22:33.054 "enable_recv_pipe": true, 00:22:33.054 "enable_quickack": false, 00:22:33.054 "enable_placement_id": 0, 00:22:33.054 "enable_zerocopy_send_server": true, 00:22:33.054 "enable_zerocopy_send_client": false, 00:22:33.054 "zerocopy_threshold": 0, 00:22:33.054 "tls_version": 0, 00:22:33.054 "enable_ktls": false 00:22:33.054 } 00:22:33.054 
} 00:22:33.054 ] 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "subsystem": "vmd", 00:22:33.054 "config": [] 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "subsystem": "accel", 00:22:33.054 "config": [ 00:22:33.054 { 00:22:33.054 "method": "accel_set_options", 00:22:33.054 "params": { 00:22:33.054 "small_cache_size": 128, 00:22:33.054 "large_cache_size": 16, 00:22:33.054 "task_count": 2048, 00:22:33.054 "sequence_count": 2048, 00:22:33.054 "buf_count": 2048 00:22:33.054 } 00:22:33.054 } 00:22:33.054 ] 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "subsystem": "bdev", 00:22:33.054 "config": [ 00:22:33.054 { 00:22:33.054 "method": "bdev_set_options", 00:22:33.054 "params": { 00:22:33.054 "bdev_io_pool_size": 65535, 00:22:33.054 "bdev_io_cache_size": 256, 00:22:33.054 "bdev_auto_examine": true, 00:22:33.054 "iobuf_small_cache_size": 128, 00:22:33.054 "iobuf_large_cache_size": 16 00:22:33.054 } 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "method": "bdev_raid_set_options", 00:22:33.054 "params": { 00:22:33.054 "process_window_size_kb": 1024, 00:22:33.054 "process_max_bandwidth_mb_sec": 0 00:22:33.054 } 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "method": "bdev_iscsi_set_options", 00:22:33.054 "params": { 00:22:33.054 "timeout_sec": 30 00:22:33.054 } 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "method": "bdev_nvme_set_options", 00:22:33.054 "params": { 00:22:33.054 "action_on_timeout": "none", 00:22:33.054 "timeout_us": 0, 00:22:33.054 "timeout_admin_us": 0, 00:22:33.054 "keep_alive_timeout_ms": 10000, 00:22:33.054 "arbitration_burst": 0, 00:22:33.054 "low_priority_weight": 0, 00:22:33.054 "medium_priority_weight": 0, 00:22:33.054 "high_priority_weight": 0, 00:22:33.054 "nvme_adminq_poll_period_us": 10000, 00:22:33.054 "nvme_ioq_poll_period_us": 0, 00:22:33.054 "io_queue_requests": 512, 00:22:33.054 "delay_cmd_submit": true, 00:22:33.054 "transport_retry_count": 4, 00:22:33.054 "bdev_retry_count": 3, 00:22:33.054 "transport_ack_timeout": 0, 00:22:33.054 "ctrlr_loss_timeout_sec": 0, 00:22:33.054 "reconnect_delay_sec": 0, 00:22:33.054 "fast_io_fail_timeout_sec": 0, 00:22:33.054 "disable_auto_failback": false, 00:22:33.054 "generate_uuids": false, 00:22:33.054 "transport_tos": 0, 00:22:33.054 "nvme_error_stat": false, 00:22:33.054 "rdma_srq_size": 0, 00:22:33.054 "io_path_stat": false, 00:22:33.054 "allow_accel_sequence": false, 00:22:33.054 "rdma_max_cq_size": 0, 00:22:33.054 "rdma_cm_event_timeout_ms": 0, 00:22:33.054 "dhchap_digests": [ 00:22:33.054 "sha256", 00:22:33.054 "sha384", 00:22:33.054 "sha512" 00:22:33.054 ], 00:22:33.054 "dhchap_dhgroups": [ 00:22:33.054 "null", 00:22:33.054 "ffdhe2048", 00:22:33.054 "ffdhe3072", 00:22:33.054 "ffdhe4096", 00:22:33.054 "ffdhe6144", 00:22:33.054 "ffdhe8192" 00:22:33.054 ] 00:22:33.054 } 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "method": "bdev_nvme_attach_controller", 00:22:33.054 "params": { 00:22:33.054 "name": "TLSTEST", 00:22:33.054 "trtype": "TCP", 00:22:33.054 "adrfam": "IPv4", 00:22:33.054 "traddr": "10.0.0.2", 00:22:33.054 "trsvcid": "4420", 00:22:33.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.054 "prchk_reftag": false, 00:22:33.054 "prchk_guard": false, 00:22:33.054 "ctrlr_loss_timeout_sec": 0, 00:22:33.054 "reconnect_delay_sec": 0, 00:22:33.054 "fast_io_fail_timeout_sec": 0, 00:22:33.054 "psk": "key0", 00:22:33.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.054 "hdgst": false, 00:22:33.054 "ddgst": false, 00:22:33.054 "multipath": "multipath" 00:22:33.054 } 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "method": 
"bdev_nvme_set_hotplug", 00:22:33.054 "params": { 00:22:33.054 "period_us": 100000, 00:22:33.054 "enable": false 00:22:33.054 } 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "method": "bdev_wait_for_examine" 00:22:33.054 } 00:22:33.054 ] 00:22:33.054 }, 00:22:33.054 { 00:22:33.054 "subsystem": "nbd", 00:22:33.054 "config": [] 00:22:33.054 } 00:22:33.054 ] 00:22:33.054 }' 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1480158 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1480158 ']' 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1480158 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1480158 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480158' 00:22:33.054 killing process with pid 1480158 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1480158 00:22:33.054 Received shutdown signal, test time was about 10.000000 seconds 00:22:33.054 00:22:33.054 Latency(us) 00:22:33.054 [2024-11-26T06:32:01.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.054 [2024-11-26T06:32:01.152Z] =================================================================================================================== 00:22:33.054 [2024-11-26T06:32:01.152Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:33.054 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1480158 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1479792 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1479792 ']' 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1479792 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1479792 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1479792' 00:22:33.316 killing process with pid 1479792 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1479792 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1479792 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.316 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:33.316 "subsystems": [ 00:22:33.316 { 00:22:33.316 "subsystem": "keyring", 00:22:33.316 "config": [ 00:22:33.316 { 00:22:33.316 "method": "keyring_file_add_key", 00:22:33.316 "params": { 00:22:33.316 "name": "key0", 00:22:33.316 "path": "/tmp/tmp.U5eqNhvrHB" 00:22:33.316 } 00:22:33.316 } 00:22:33.316 ] 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "subsystem": "iobuf", 00:22:33.316 "config": [ 00:22:33.316 { 00:22:33.316 "method": "iobuf_set_options", 00:22:33.316 "params": { 00:22:33.316 "small_pool_count": 8192, 00:22:33.316 "large_pool_count": 1024, 00:22:33.316 "small_bufsize": 8192, 00:22:33.316 "large_bufsize": 135168, 00:22:33.316 "enable_numa": false 00:22:33.316 } 00:22:33.316 } 00:22:33.316 ] 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "subsystem": "sock", 00:22:33.316 "config": [ 00:22:33.316 { 00:22:33.316 "method": "sock_set_default_impl", 00:22:33.316 "params": { 00:22:33.316 "impl_name": "posix" 00:22:33.316 } 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "method": "sock_impl_set_options", 00:22:33.316 "params": { 00:22:33.316 "impl_name": "ssl", 00:22:33.316 "recv_buf_size": 4096, 00:22:33.316 "send_buf_size": 4096, 00:22:33.316 "enable_recv_pipe": true, 00:22:33.316 "enable_quickack": false, 00:22:33.316 "enable_placement_id": 0, 00:22:33.316 "enable_zerocopy_send_server": true, 00:22:33.316 "enable_zerocopy_send_client": false, 00:22:33.316 "zerocopy_threshold": 0, 00:22:33.316 "tls_version": 0, 00:22:33.316 "enable_ktls": false 00:22:33.316 } 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "method": "sock_impl_set_options", 00:22:33.316 "params": { 00:22:33.316 "impl_name": "posix", 00:22:33.316 "recv_buf_size": 2097152, 00:22:33.316 "send_buf_size": 2097152, 00:22:33.316 "enable_recv_pipe": true, 00:22:33.316 "enable_quickack": false, 00:22:33.316 "enable_placement_id": 0, 00:22:33.316 "enable_zerocopy_send_server": true, 00:22:33.316 "enable_zerocopy_send_client": false, 00:22:33.316 "zerocopy_threshold": 0, 00:22:33.316 "tls_version": 0, 00:22:33.316 "enable_ktls": false 00:22:33.316 } 00:22:33.316 } 00:22:33.316 ] 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "subsystem": "vmd", 00:22:33.316 "config": [] 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "subsystem": "accel", 00:22:33.316 "config": [ 00:22:33.316 { 00:22:33.316 "method": "accel_set_options", 00:22:33.316 "params": { 00:22:33.316 "small_cache_size": 128, 00:22:33.316 "large_cache_size": 16, 00:22:33.316 "task_count": 2048, 00:22:33.316 "sequence_count": 2048, 00:22:33.316 "buf_count": 2048 00:22:33.316 } 00:22:33.316 } 00:22:33.316 ] 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "subsystem": "bdev", 00:22:33.316 "config": [ 00:22:33.316 { 00:22:33.316 "method": "bdev_set_options", 00:22:33.316 "params": { 00:22:33.316 "bdev_io_pool_size": 65535, 00:22:33.316 "bdev_io_cache_size": 256, 00:22:33.316 "bdev_auto_examine": true, 00:22:33.316 "iobuf_small_cache_size": 128, 00:22:33.316 "iobuf_large_cache_size": 16 00:22:33.316 } 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "method": "bdev_raid_set_options", 00:22:33.316 "params": { 00:22:33.316 
"process_window_size_kb": 1024, 00:22:33.316 "process_max_bandwidth_mb_sec": 0 00:22:33.316 } 00:22:33.316 }, 00:22:33.316 { 00:22:33.316 "method": "bdev_iscsi_set_options", 00:22:33.316 "params": { 00:22:33.316 "timeout_sec": 30 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "method": "bdev_nvme_set_options", 00:22:33.317 "params": { 00:22:33.317 "action_on_timeout": "none", 00:22:33.317 "timeout_us": 0, 00:22:33.317 "timeout_admin_us": 0, 00:22:33.317 "keep_alive_timeout_ms": 10000, 00:22:33.317 "arbitration_burst": 0, 00:22:33.317 "low_priority_weight": 0, 00:22:33.317 "medium_priority_weight": 0, 00:22:33.317 "high_priority_weight": 0, 00:22:33.317 "nvme_adminq_poll_period_us": 10000, 00:22:33.317 "nvme_ioq_poll_period_us": 0, 00:22:33.317 "io_queue_requests": 0, 00:22:33.317 "delay_cmd_submit": true, 00:22:33.317 "transport_retry_count": 4, 00:22:33.317 "bdev_retry_count": 3, 00:22:33.317 "transport_ack_timeout": 0, 00:22:33.317 "ctrlr_loss_timeout_sec": 0, 00:22:33.317 "reconnect_delay_sec": 0, 00:22:33.317 "fast_io_fail_timeout_sec": 0, 00:22:33.317 "disable_auto_failback": false, 00:22:33.317 "generate_uuids": false, 00:22:33.317 "transport_tos": 0, 00:22:33.317 "nvme_error_stat": false, 00:22:33.317 "rdma_srq_size": 0, 00:22:33.317 "io_path_stat": false, 00:22:33.317 "allow_accel_sequence": false, 00:22:33.317 "rdma_max_cq_size": 0, 00:22:33.317 "rdma_cm_event_timeout_ms": 0, 00:22:33.317 "dhchap_digests": [ 00:22:33.317 "sha256", 00:22:33.317 "sha384", 00:22:33.317 "sha512" 00:22:33.317 ], 00:22:33.317 "dhchap_dhgroups": [ 00:22:33.317 "null", 00:22:33.317 "ffdhe2048", 00:22:33.317 "ffdhe3072", 00:22:33.317 "ffdhe4096", 00:22:33.317 "ffdhe6144", 00:22:33.317 "ffdhe8192" 00:22:33.317 ] 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "method": "bdev_nvme_set_hotplug", 00:22:33.317 "params": { 00:22:33.317 "period_us": 100000, 00:22:33.317 "enable": false 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "method": "bdev_malloc_create", 00:22:33.317 "params": { 00:22:33.317 "name": "malloc0", 00:22:33.317 "num_blocks": 8192, 00:22:33.317 "block_size": 4096, 00:22:33.317 "physical_block_size": 4096, 00:22:33.317 "uuid": "eb7dfd3c-725f-4f01-8de2-792fa509faf8", 00:22:33.317 "optimal_io_boundary": 0, 00:22:33.317 "md_size": 0, 00:22:33.317 "dif_type": 0, 00:22:33.317 "dif_is_head_of_md": false, 00:22:33.317 "dif_pi_format": 0 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "method": "bdev_wait_for_examine" 00:22:33.317 } 00:22:33.317 ] 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "subsystem": "nbd", 00:22:33.317 "config": [] 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "subsystem": "scheduler", 00:22:33.317 "config": [ 00:22:33.317 { 00:22:33.317 "method": "framework_set_scheduler", 00:22:33.317 "params": { 00:22:33.317 "name": "static" 00:22:33.317 } 00:22:33.317 } 00:22:33.317 ] 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "subsystem": "nvmf", 00:22:33.317 "config": [ 00:22:33.317 { 00:22:33.317 "method": "nvmf_set_config", 00:22:33.317 "params": { 00:22:33.317 "discovery_filter": "match_any", 00:22:33.317 "admin_cmd_passthru": { 00:22:33.317 "identify_ctrlr": false 00:22:33.317 }, 00:22:33.317 "dhchap_digests": [ 00:22:33.317 "sha256", 00:22:33.317 "sha384", 00:22:33.317 "sha512" 00:22:33.317 ], 00:22:33.317 "dhchap_dhgroups": [ 00:22:33.317 "null", 00:22:33.317 "ffdhe2048", 00:22:33.317 "ffdhe3072", 00:22:33.317 "ffdhe4096", 00:22:33.317 "ffdhe6144", 00:22:33.317 "ffdhe8192" 00:22:33.317 ] 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 
00:22:33.317 "method": "nvmf_set_max_subsystems", 00:22:33.317 "params": { 00:22:33.317 "max_subsystems": 1024 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "method": "nvmf_set_crdt", 00:22:33.317 "params": { 00:22:33.317 "crdt1": 0, 00:22:33.317 "crdt2": 0, 00:22:33.317 "crdt3": 0 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "method": "nvmf_create_transport", 00:22:33.317 "params": { 00:22:33.317 "trtype": "TCP", 00:22:33.317 "max_queue_depth": 128, 00:22:33.317 "max_io_qpairs_per_ctrlr": 127, 00:22:33.317 "in_capsule_data_size": 4096, 00:22:33.317 "max_io_size": 131072, 00:22:33.317 "io_unit_size": 131072, 00:22:33.317 "max_aq_depth": 128, 00:22:33.317 "num_shared_buffers": 511, 00:22:33.317 "buf_cache_size": 4294967295, 00:22:33.317 "dif_insert_or_strip": false, 00:22:33.317 "zcopy": false, 00:22:33.317 "c2h_success": false, 00:22:33.317 "sock_priority": 0, 00:22:33.317 "abort_timeout_sec": 1, 00:22:33.317 "ack_timeout": 0, 00:22:33.317 "data_wr_pool_size": 0 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "method": "nvmf_create_subsystem", 00:22:33.317 "params": { 00:22:33.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.317 "allow_any_host": false, 00:22:33.317 "serial_number": "SPDK00000000000001", 00:22:33.317 "model_number": "SPDK bdev Controller", 00:22:33.317 "max_namespaces": 10, 00:22:33.317 "min_cntlid": 1, 00:22:33.317 "max_cntlid": 65519, 00:22:33.317 "ana_reporting": false 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "method": "nvmf_subsystem_add_host", 00:22:33.317 "params": { 00:22:33.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.317 "host": "nqn.2016-06.io.spdk:host1", 00:22:33.317 "psk": "key0" 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "method": "nvmf_subsystem_add_ns", 00:22:33.317 "params": { 00:22:33.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.317 "namespace": { 00:22:33.317 "nsid": 1, 00:22:33.317 "bdev_name": "malloc0", 00:22:33.317 "nguid": "EB7DFD3C725F4F018DE2792FA509FAF8", 00:22:33.317 "uuid": "eb7dfd3c-725f-4f01-8de2-792fa509faf8", 00:22:33.317 "no_auto_visible": false 00:22:33.317 } 00:22:33.317 } 00:22:33.317 }, 00:22:33.317 { 00:22:33.317 "method": "nvmf_subsystem_add_listener", 00:22:33.317 "params": { 00:22:33.317 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.317 "listen_address": { 00:22:33.317 "trtype": "TCP", 00:22:33.317 "adrfam": "IPv4", 00:22:33.317 "traddr": "10.0.0.2", 00:22:33.317 "trsvcid": "4420" 00:22:33.317 }, 00:22:33.317 "secure_channel": true 00:22:33.317 } 00:22:33.317 } 00:22:33.317 ] 00:22:33.317 } 00:22:33.317 ] 00:22:33.317 }' 00:22:33.317 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1480615 00:22:33.317 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:33.317 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1480615 00:22:33.317 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1480615 ']' 00:22:33.317 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.317 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.317 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:22:33.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.317 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.317 07:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.578 [2024-11-26 07:32:01.457802] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:22:33.578 [2024-11-26 07:32:01.457888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.578 [2024-11-26 07:32:01.550870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.578 [2024-11-26 07:32:01.584102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.578 [2024-11-26 07:32:01.584137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.578 [2024-11-26 07:32:01.584143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.578 [2024-11-26 07:32:01.584148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.578 [2024-11-26 07:32:01.584152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.578 [2024-11-26 07:32:01.584682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.845 [2024-11-26 07:32:01.777466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.845 [2024-11-26 07:32:01.809494] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:33.845 [2024-11-26 07:32:01.809707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1480865 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1480865 /var/tmp/bdevperf.sock 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1480865 ']' 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:34.417 
07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.417 07:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:34.417 "subsystems": [ 00:22:34.417 { 00:22:34.417 "subsystem": "keyring", 00:22:34.417 "config": [ 00:22:34.417 { 00:22:34.417 "method": "keyring_file_add_key", 00:22:34.417 "params": { 00:22:34.417 "name": "key0", 00:22:34.417 "path": "/tmp/tmp.U5eqNhvrHB" 00:22:34.417 } 00:22:34.417 } 00:22:34.417 ] 00:22:34.417 }, 00:22:34.417 { 00:22:34.417 "subsystem": "iobuf", 00:22:34.417 "config": [ 00:22:34.417 { 00:22:34.417 "method": "iobuf_set_options", 00:22:34.417 "params": { 00:22:34.417 "small_pool_count": 8192, 00:22:34.417 "large_pool_count": 1024, 00:22:34.417 "small_bufsize": 8192, 00:22:34.417 "large_bufsize": 135168, 00:22:34.417 "enable_numa": false 00:22:34.417 } 00:22:34.417 } 00:22:34.417 ] 00:22:34.417 }, 00:22:34.417 { 00:22:34.417 "subsystem": "sock", 00:22:34.417 "config": [ 00:22:34.417 { 00:22:34.417 "method": "sock_set_default_impl", 00:22:34.417 "params": { 00:22:34.417 "impl_name": "posix" 00:22:34.417 } 00:22:34.417 }, 00:22:34.417 { 00:22:34.417 "method": "sock_impl_set_options", 00:22:34.417 "params": { 00:22:34.417 "impl_name": "ssl", 00:22:34.417 "recv_buf_size": 4096, 00:22:34.417 "send_buf_size": 4096, 00:22:34.417 "enable_recv_pipe": true, 00:22:34.417 "enable_quickack": false, 00:22:34.417 "enable_placement_id": 0, 00:22:34.417 "enable_zerocopy_send_server": true, 00:22:34.417 "enable_zerocopy_send_client": false, 00:22:34.417 "zerocopy_threshold": 0, 00:22:34.417 "tls_version": 0, 00:22:34.417 "enable_ktls": false 00:22:34.417 } 00:22:34.417 }, 00:22:34.417 { 00:22:34.417 "method": "sock_impl_set_options", 00:22:34.417 "params": { 00:22:34.417 "impl_name": "posix", 00:22:34.417 "recv_buf_size": 2097152, 00:22:34.417 "send_buf_size": 2097152, 00:22:34.417 "enable_recv_pipe": true, 00:22:34.417 "enable_quickack": false, 00:22:34.417 "enable_placement_id": 0, 00:22:34.417 "enable_zerocopy_send_server": true, 00:22:34.417 "enable_zerocopy_send_client": false, 00:22:34.417 "zerocopy_threshold": 0, 00:22:34.417 "tls_version": 0, 00:22:34.417 "enable_ktls": false 00:22:34.417 } 00:22:34.417 } 00:22:34.417 ] 00:22:34.417 }, 00:22:34.417 { 00:22:34.417 "subsystem": "vmd", 00:22:34.417 "config": [] 00:22:34.417 }, 00:22:34.417 { 00:22:34.417 "subsystem": "accel", 00:22:34.417 "config": [ 00:22:34.417 { 00:22:34.417 "method": "accel_set_options", 00:22:34.417 "params": { 00:22:34.417 "small_cache_size": 128, 00:22:34.417 "large_cache_size": 16, 00:22:34.417 "task_count": 2048, 00:22:34.417 "sequence_count": 2048, 00:22:34.417 "buf_count": 2048 00:22:34.417 } 00:22:34.417 } 00:22:34.417 ] 00:22:34.417 }, 00:22:34.417 { 00:22:34.417 "subsystem": "bdev", 00:22:34.417 "config": [ 00:22:34.417 { 00:22:34.417 "method": "bdev_set_options", 00:22:34.417 "params": { 00:22:34.417 "bdev_io_pool_size": 65535, 00:22:34.417 "bdev_io_cache_size": 256, 00:22:34.417 "bdev_auto_examine": true, 00:22:34.417 "iobuf_small_cache_size": 128, 00:22:34.417 "iobuf_large_cache_size": 16 00:22:34.417 } 00:22:34.417 
}, 00:22:34.417 { 00:22:34.417 "method": "bdev_raid_set_options", 00:22:34.417 "params": { 00:22:34.417 "process_window_size_kb": 1024, 00:22:34.417 "process_max_bandwidth_mb_sec": 0 00:22:34.417 } 00:22:34.417 }, 00:22:34.417 { 00:22:34.417 "method": "bdev_iscsi_set_options", 00:22:34.417 "params": { 00:22:34.417 "timeout_sec": 30 00:22:34.417 } 00:22:34.417 }, 00:22:34.417 { 00:22:34.417 "method": "bdev_nvme_set_options", 00:22:34.417 "params": { 00:22:34.417 "action_on_timeout": "none", 00:22:34.417 "timeout_us": 0, 00:22:34.417 "timeout_admin_us": 0, 00:22:34.417 "keep_alive_timeout_ms": 10000, 00:22:34.417 "arbitration_burst": 0, 00:22:34.417 "low_priority_weight": 0, 00:22:34.417 "medium_priority_weight": 0, 00:22:34.417 "high_priority_weight": 0, 00:22:34.417 "nvme_adminq_poll_period_us": 10000, 00:22:34.417 "nvme_ioq_poll_period_us": 0, 00:22:34.417 "io_queue_requests": 512, 00:22:34.417 "delay_cmd_submit": true, 00:22:34.417 "transport_retry_count": 4, 00:22:34.417 "bdev_retry_count": 3, 00:22:34.417 "transport_ack_timeout": 0, 00:22:34.417 "ctrlr_loss_timeout_sec": 0, 00:22:34.417 "reconnect_delay_sec": 0, 00:22:34.417 "fast_io_fail_timeout_sec": 0, 00:22:34.417 "disable_auto_failback": false, 00:22:34.417 "generate_uuids": false, 00:22:34.417 "transport_tos": 0, 00:22:34.417 "nvme_error_stat": false, 00:22:34.417 "rdma_srq_size": 0, 00:22:34.417 "io_path_stat": false, 00:22:34.417 "allow_accel_sequence": false, 00:22:34.417 "rdma_max_cq_size": 0, 00:22:34.417 "rdma_cm_event_timeout_ms": 0, 00:22:34.417 "dhchap_digests": [ 00:22:34.417 "sha256", 00:22:34.417 "sha384", 00:22:34.417 "sha512" 00:22:34.417 ], 00:22:34.417 "dhchap_dhgroups": [ 00:22:34.417 "null", 00:22:34.417 "ffdhe2048", 00:22:34.417 "ffdhe3072", 00:22:34.417 "ffdhe4096", 00:22:34.417 "ffdhe6144", 00:22:34.417 "ffdhe8192" 00:22:34.417 ] 00:22:34.417 } 00:22:34.417 }, 00:22:34.417 { 00:22:34.417 "method": "bdev_nvme_attach_controller", 00:22:34.417 "params": { 00:22:34.417 "name": "TLSTEST", 00:22:34.417 "trtype": "TCP", 00:22:34.417 "adrfam": "IPv4", 00:22:34.417 "traddr": "10.0.0.2", 00:22:34.417 "trsvcid": "4420", 00:22:34.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.417 "prchk_reftag": false, 00:22:34.417 "prchk_guard": false, 00:22:34.417 "ctrlr_loss_timeout_sec": 0, 00:22:34.418 "reconnect_delay_sec": 0, 00:22:34.418 "fast_io_fail_timeout_sec": 0, 00:22:34.418 "psk": "key0", 00:22:34.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.418 "hdgst": false, 00:22:34.418 "ddgst": false, 00:22:34.418 "multipath": "multipath" 00:22:34.418 } 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "method": "bdev_nvme_set_hotplug", 00:22:34.418 "params": { 00:22:34.418 "period_us": 100000, 00:22:34.418 "enable": false 00:22:34.418 } 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "method": "bdev_wait_for_examine" 00:22:34.418 } 00:22:34.418 ] 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "subsystem": "nbd", 00:22:34.418 "config": [] 00:22:34.418 } 00:22:34.418 ] 00:22:34.418 }' 00:22:34.418 [2024-11-26 07:32:02.350664] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
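At target/tls.sh@205-206 the test replays the two save_config dumps captured above: the target JSON is fed back to a fresh nvmf_tgt as /dev/fd/62 and the bdevperf JSON to a fresh bdevperf as /dev/fd/63. A minimal sketch of that pattern, assuming bash process substitution (the trace only shows the echo'd JSON alongside the /dev/fd paths) and abbreviating the binary paths; $RPC is the rpc.py shorthand from the earlier sketch:

  tgtconf=$($RPC save_config)                                   # live target config as JSON
  bdevperfconf=$($RPC -s /var/tmp/bdevperf.sock save_config)    # live bdevperf config as JSON
  nvmfappstart -m 0x2 -c <(echo "$tgtconf")                     # restart nvmf_tgt; appears in the trace as -c /dev/fd/62
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
           -c <(echo "$bdevperfconf")                           # appears in the trace as -c /dev/fd/63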
00:22:34.418 [2024-11-26 07:32:02.350718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480865 ] 00:22:34.418 [2024-11-26 07:32:02.439370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.418 [2024-11-26 07:32:02.474488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.678 [2024-11-26 07:32:02.613986] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.251 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.251 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:35.251 07:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:35.251 Running I/O for 10 seconds... 00:22:37.582 6049.00 IOPS, 23.63 MiB/s [2024-11-26T06:32:06.254Z] 6002.00 IOPS, 23.45 MiB/s [2024-11-26T06:32:07.640Z] 6104.67 IOPS, 23.85 MiB/s [2024-11-26T06:32:08.585Z] 6151.00 IOPS, 24.03 MiB/s [2024-11-26T06:32:09.529Z] 6229.80 IOPS, 24.34 MiB/s [2024-11-26T06:32:10.474Z] 6281.67 IOPS, 24.54 MiB/s [2024-11-26T06:32:11.417Z] 6304.57 IOPS, 24.63 MiB/s [2024-11-26T06:32:12.408Z] 6336.75 IOPS, 24.75 MiB/s [2024-11-26T06:32:13.413Z] 6363.22 IOPS, 24.86 MiB/s [2024-11-26T06:32:13.413Z] 6374.80 IOPS, 24.90 MiB/s 00:22:45.315 Latency(us) 00:22:45.315 [2024-11-26T06:32:13.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.315 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:45.315 Verification LBA range: start 0x0 length 0x2000 00:22:45.315 TLSTESTn1 : 10.01 6379.33 24.92 0.00 0.00 20031.81 5133.65 25231.36 00:22:45.315 [2024-11-26T06:32:13.413Z] =================================================================================================================== 00:22:45.315 [2024-11-26T06:32:13.413Z] Total : 6379.33 24.92 0.00 0.00 20031.81 5133.65 25231.36 00:22:45.315 { 00:22:45.315 "results": [ 00:22:45.315 { 00:22:45.315 "job": "TLSTESTn1", 00:22:45.315 "core_mask": "0x4", 00:22:45.315 "workload": "verify", 00:22:45.315 "status": "finished", 00:22:45.315 "verify_range": { 00:22:45.315 "start": 0, 00:22:45.315 "length": 8192 00:22:45.315 }, 00:22:45.315 "queue_depth": 128, 00:22:45.315 "io_size": 4096, 00:22:45.315 "runtime": 10.012651, 00:22:45.315 "iops": 6379.3295102366, 00:22:45.315 "mibps": 24.91925589936172, 00:22:45.315 "io_failed": 0, 00:22:45.315 "io_timeout": 0, 00:22:45.315 "avg_latency_us": 20031.81416747555, 00:22:45.315 "min_latency_us": 5133.653333333334, 00:22:45.315 "max_latency_us": 25231.36 00:22:45.315 } 00:22:45.315 ], 00:22:45.315 "core_count": 1 00:22:45.315 } 00:22:45.315 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.315 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1480865 00:22:45.315 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1480865 ']' 00:22:45.315 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1480865 00:22:45.315 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:22:45.315 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.315 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1480865 00:22:45.315 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:45.315 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:45.316 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480865' 00:22:45.316 killing process with pid 1480865 00:22:45.316 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1480865 00:22:45.316 Received shutdown signal, test time was about 10.000000 seconds 00:22:45.316 00:22:45.316 Latency(us) 00:22:45.316 [2024-11-26T06:32:13.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.316 [2024-11-26T06:32:13.414Z] =================================================================================================================== 00:22:45.316 [2024-11-26T06:32:13.414Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:45.316 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1480865 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1480615 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1480615 ']' 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1480615 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1480615 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480615' 00:22:45.577 killing process with pid 1480615 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1480615 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1480615 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1483005 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1483005 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1483005 ']' 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.577 07:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:45.839 [2024-11-26 07:32:13.702889] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:22:45.839 [2024-11-26 07:32:13.702945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.839 [2024-11-26 07:32:13.798287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.839 [2024-11-26 07:32:13.846515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.839 [2024-11-26 07:32:13.846572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.839 [2024-11-26 07:32:13.846582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.839 [2024-11-26 07:32:13.846589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.839 [2024-11-26 07:32:13.846595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.839 [2024-11-26 07:32:13.847384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.785 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.785 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:46.785 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:46.785 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:46.785 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.785 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.785 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.U5eqNhvrHB 00:22:46.785 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.U5eqNhvrHB 00:22:46.785 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:46.785 [2024-11-26 07:32:14.735865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.785 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:47.048 07:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:47.048 [2024-11-26 07:32:15.120829] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:47.048 [2024-11-26 07:32:15.121168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.310 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:47.310 malloc0 00:22:47.310 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:47.572 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1483581 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1483581 /var/tmp/bdevperf.sock 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1483581 ']' 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.834 07:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.096 [2024-11-26 07:32:15.971068] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:22:48.096 [2024-11-26 07:32:15.971137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483581 ] 00:22:48.096 [2024-11-26 07:32:16.054899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.096 [2024-11-26 07:32:16.085100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.040 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.040 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:49.040 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB 00:22:49.040 07:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:49.040 [2024-11-26 07:32:17.100847] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:49.300 nvme0n1 00:22:49.300 07:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:49.300 Running I/O for 1 seconds... 
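The host-side sequence just traced (target/tls.sh@229-234) mirrors the target setup: register the same PSK file on bdevperf's private RPC socket, attach a TLS-secured NVMe controller, then drive I/O through bdevperf.py. Condensed from the commands above, with $RPC as before; the IOPS sample and Latency table below are this run's output:

  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
       -s /var/tmp/bdevperf.sock perform_tests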
00:22:50.501 5282.00 IOPS, 20.63 MiB/s
00:22:50.501 Latency(us)
00:22:50.501 [2024-11-26T06:32:18.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:50.501 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:50.501 Verification LBA range: start 0x0 length 0x2000
00:22:50.501 nvme0n1 : 1.05 5167.12 20.18 0.00 0.00 24262.84 7864.32 52865.71
00:22:50.501 [2024-11-26T06:32:18.599Z] ===================================================================================================================
00:22:50.501 [2024-11-26T06:32:18.599Z] Total : 5167.12 20.18 0.00 0.00 24262.84 7864.32 52865.71
00:22:50.501 {
00:22:50.501 "results": [
00:22:50.501 {
00:22:50.501 "job": "nvme0n1",
00:22:50.501 "core_mask": "0x2",
00:22:50.501 "workload": "verify",
00:22:50.501 "status": "finished",
00:22:50.501 "verify_range": {
00:22:50.501 "start": 0,
00:22:50.501 "length": 8192
00:22:50.501 },
00:22:50.501 "queue_depth": 128,
00:22:50.501 "io_size": 4096,
00:22:50.501 "runtime": 1.047005,
00:22:50.501 "iops": 5167.119545751931,
00:22:50.501 "mibps": 20.18406072559348,
00:22:50.501 "io_failed": 0,
00:22:50.501 "io_timeout": 0,
00:22:50.501 "avg_latency_us": 24262.8414935305,
00:22:50.501 "min_latency_us": 7864.32,
00:22:50.501 "max_latency_us": 52865.706666666665
00:22:50.501 }
00:22:50.501 ],
00:22:50.501 "core_count": 1
00:22:50.501 }
00:22:50.501 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1483581
00:22:50.501 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1483581 ']'
00:22:50.501 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1483581
00:22:50.501 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:50.501 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:50.501 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1483581
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1483581'
00:22:50.502 killing process with pid 1483581
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1483581
00:22:50.502 Received shutdown signal, test time was about 1.000000 seconds
00:22:50.502
00:22:50.502 Latency(us)
00:22:50.502 [2024-11-26T06:32:18.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:50.502 [2024-11-26T06:32:18.600Z] ===================================================================================================================
00:22:50.502 [2024-11-26T06:32:18.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1483581
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1483005
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1483005 ']'
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1483005
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1483005
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1483005'
00:22:50.502 killing process with pid 1483005
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1483005
00:22:50.502 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1483005
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1483969
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1483969
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1483969 ']'
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:50.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:50.763 07:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:51.024 [2024-11-26 07:32:18.783001] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:22:51.024 [2024-11-26 07:32:18.783057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:51.024 [2024-11-26 07:32:18.881776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:51.024 [2024-11-26 07:32:18.932218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:51.024 [2024-11-26 07:32:18.932274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
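
Since this target instance was started with -e 0xFFFF, every tracepoint group is enabled, and the notices above and below describe how to inspect them. As a small sketch of that how-to (the copy destination is illustrative, not from the log):

    spdk_trace -s nvmf -i 0              # capture a snapshot of events at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the raw trace file for offline analysis/debug

The cleanup phase at the end of this test archives the same /dev/shm/nvmf_trace.0 file into the output directory.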
00:22:51.024 [2024-11-26 07:32:18.932283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:51.024 [2024-11-26 07:32:18.932290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:51.024 [2024-11-26 07:32:18.932296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:51.024 [2024-11-26 07:32:18.933035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:51.597 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:51.597 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:51.597 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:51.597 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:51.597 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:51.597 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:51.597 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd
00:22:51.597 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:51.597 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:51.597 [2024-11-26 07:32:19.659301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:51.597 malloc0
00:22:51.597 [2024-11-26 07:32:19.689560] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:51.597 [2024-11-26 07:32:19.689911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:51.857 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:51.857 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1484294
00:22:51.857 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1484294 /var/tmp/bdevperf.sock
00:22:51.857 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:22:51.857 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1484294 ']'
00:22:51.857 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:51.857 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:51.857 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:51.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:51.857 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:51.857 07:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:51.857 [2024-11-26 07:32:19.772388] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:22:51.857 [2024-11-26 07:32:19.772459] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484294 ]
00:22:51.857 [2024-11-26 07:32:19.860249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:51.857 [2024-11-26 07:32:19.894459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:52.800 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:52.800 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:52.800 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.U5eqNhvrHB
00:22:52.800 07:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:22:53.061 [2024-11-26 07:32:20.900526] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:53.061 nvme0n1
00:22:53.061 07:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:53.061 Running I/O for 1 seconds...
00:22:54.266 5853.00 IOPS, 22.86 MiB/s
00:22:54.266 Latency(us)
00:22:54.266 [2024-11-26T06:32:22.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.266 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:54.266 Verification LBA range: start 0x0 length 0x2000
00:22:54.266 nvme0n1 : 1.01 5915.53 23.11 0.00 0.00 21509.25 4177.92 21626.88
00:22:54.266 [2024-11-26T06:32:22.364Z] ===================================================================================================================
00:22:54.266 [2024-11-26T06:32:22.364Z] Total : 5915.53 23.11 0.00 0.00 21509.25 4177.92 21626.88
00:22:54.266 {
00:22:54.266 "results": [
00:22:54.266 {
00:22:54.266 "job": "nvme0n1",
00:22:54.266 "core_mask": "0x2",
00:22:54.266 "workload": "verify",
00:22:54.266 "status": "finished",
00:22:54.266 "verify_range": {
00:22:54.266 "start": 0,
00:22:54.266 "length": 8192
00:22:54.266 },
00:22:54.266 "queue_depth": 128,
00:22:54.266 "io_size": 4096,
00:22:54.266 "runtime": 1.011067,
00:22:54.266 "iops": 5915.532798518792,
00:22:54.266 "mibps": 23.107549994214033,
00:22:54.266 "io_failed": 0,
00:22:54.266 "io_timeout": 0,
00:22:54.266 "avg_latency_us": 21509.245838488547,
00:22:54.266 "min_latency_us": 4177.92,
00:22:54.266 "max_latency_us": 21626.88
00:22:54.266 }
00:22:54.266 ],
00:22:54.266 "core_count": 1
00:22:54.266 }
00:22:54.266 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config
00:22:54.266 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:54.266 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:54.266 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:54.266 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls --
target/tls.sh@267 -- # tgtcfg='{ 00:22:54.266 "subsystems": [ 00:22:54.266 { 00:22:54.266 "subsystem": "keyring", 00:22:54.266 "config": [ 00:22:54.266 { 00:22:54.266 "method": "keyring_file_add_key", 00:22:54.266 "params": { 00:22:54.266 "name": "key0", 00:22:54.266 "path": "/tmp/tmp.U5eqNhvrHB" 00:22:54.266 } 00:22:54.266 } 00:22:54.266 ] 00:22:54.266 }, 00:22:54.266 { 00:22:54.266 "subsystem": "iobuf", 00:22:54.266 "config": [ 00:22:54.266 { 00:22:54.266 "method": "iobuf_set_options", 00:22:54.266 "params": { 00:22:54.266 "small_pool_count": 8192, 00:22:54.266 "large_pool_count": 1024, 00:22:54.266 "small_bufsize": 8192, 00:22:54.266 "large_bufsize": 135168, 00:22:54.266 "enable_numa": false 00:22:54.266 } 00:22:54.266 } 00:22:54.266 ] 00:22:54.266 }, 00:22:54.266 { 00:22:54.266 "subsystem": "sock", 00:22:54.266 "config": [ 00:22:54.266 { 00:22:54.266 "method": "sock_set_default_impl", 00:22:54.266 "params": { 00:22:54.266 "impl_name": "posix" 00:22:54.266 } 00:22:54.266 }, 00:22:54.266 { 00:22:54.266 "method": "sock_impl_set_options", 00:22:54.266 "params": { 00:22:54.266 "impl_name": "ssl", 00:22:54.266 "recv_buf_size": 4096, 00:22:54.266 "send_buf_size": 4096, 00:22:54.266 "enable_recv_pipe": true, 00:22:54.266 "enable_quickack": false, 00:22:54.266 "enable_placement_id": 0, 00:22:54.266 "enable_zerocopy_send_server": true, 00:22:54.266 "enable_zerocopy_send_client": false, 00:22:54.266 "zerocopy_threshold": 0, 00:22:54.266 "tls_version": 0, 00:22:54.266 "enable_ktls": false 00:22:54.266 } 00:22:54.266 }, 00:22:54.266 { 00:22:54.266 "method": "sock_impl_set_options", 00:22:54.266 "params": { 00:22:54.266 "impl_name": "posix", 00:22:54.266 "recv_buf_size": 2097152, 00:22:54.266 "send_buf_size": 2097152, 00:22:54.266 "enable_recv_pipe": true, 00:22:54.266 "enable_quickack": false, 00:22:54.266 "enable_placement_id": 0, 00:22:54.266 "enable_zerocopy_send_server": true, 00:22:54.266 "enable_zerocopy_send_client": false, 00:22:54.266 "zerocopy_threshold": 0, 00:22:54.266 "tls_version": 0, 00:22:54.266 "enable_ktls": false 00:22:54.266 } 00:22:54.266 } 00:22:54.266 ] 00:22:54.266 }, 00:22:54.266 { 00:22:54.266 "subsystem": "vmd", 00:22:54.266 "config": [] 00:22:54.266 }, 00:22:54.266 { 00:22:54.266 "subsystem": "accel", 00:22:54.266 "config": [ 00:22:54.266 { 00:22:54.266 "method": "accel_set_options", 00:22:54.266 "params": { 00:22:54.266 "small_cache_size": 128, 00:22:54.266 "large_cache_size": 16, 00:22:54.266 "task_count": 2048, 00:22:54.266 "sequence_count": 2048, 00:22:54.266 "buf_count": 2048 00:22:54.266 } 00:22:54.266 } 00:22:54.266 ] 00:22:54.266 }, 00:22:54.266 { 00:22:54.266 "subsystem": "bdev", 00:22:54.266 "config": [ 00:22:54.266 { 00:22:54.266 "method": "bdev_set_options", 00:22:54.266 "params": { 00:22:54.266 "bdev_io_pool_size": 65535, 00:22:54.266 "bdev_io_cache_size": 256, 00:22:54.266 "bdev_auto_examine": true, 00:22:54.266 "iobuf_small_cache_size": 128, 00:22:54.266 "iobuf_large_cache_size": 16 00:22:54.266 } 00:22:54.266 }, 00:22:54.266 { 00:22:54.266 "method": "bdev_raid_set_options", 00:22:54.266 "params": { 00:22:54.266 "process_window_size_kb": 1024, 00:22:54.266 "process_max_bandwidth_mb_sec": 0 00:22:54.266 } 00:22:54.266 }, 00:22:54.266 { 00:22:54.266 "method": "bdev_iscsi_set_options", 00:22:54.266 "params": { 00:22:54.266 "timeout_sec": 30 00:22:54.266 } 00:22:54.266 }, 00:22:54.266 { 00:22:54.266 "method": "bdev_nvme_set_options", 00:22:54.266 "params": { 00:22:54.266 "action_on_timeout": "none", 00:22:54.266 "timeout_us": 0, 00:22:54.266 
"timeout_admin_us": 0, 00:22:54.266 "keep_alive_timeout_ms": 10000, 00:22:54.266 "arbitration_burst": 0, 00:22:54.266 "low_priority_weight": 0, 00:22:54.266 "medium_priority_weight": 0, 00:22:54.266 "high_priority_weight": 0, 00:22:54.267 "nvme_adminq_poll_period_us": 10000, 00:22:54.267 "nvme_ioq_poll_period_us": 0, 00:22:54.267 "io_queue_requests": 0, 00:22:54.267 "delay_cmd_submit": true, 00:22:54.267 "transport_retry_count": 4, 00:22:54.267 "bdev_retry_count": 3, 00:22:54.267 "transport_ack_timeout": 0, 00:22:54.267 "ctrlr_loss_timeout_sec": 0, 00:22:54.267 "reconnect_delay_sec": 0, 00:22:54.267 "fast_io_fail_timeout_sec": 0, 00:22:54.267 "disable_auto_failback": false, 00:22:54.267 "generate_uuids": false, 00:22:54.267 "transport_tos": 0, 00:22:54.267 "nvme_error_stat": false, 00:22:54.267 "rdma_srq_size": 0, 00:22:54.267 "io_path_stat": false, 00:22:54.267 "allow_accel_sequence": false, 00:22:54.267 "rdma_max_cq_size": 0, 00:22:54.267 "rdma_cm_event_timeout_ms": 0, 00:22:54.267 "dhchap_digests": [ 00:22:54.267 "sha256", 00:22:54.267 "sha384", 00:22:54.267 "sha512" 00:22:54.267 ], 00:22:54.267 "dhchap_dhgroups": [ 00:22:54.267 "null", 00:22:54.267 "ffdhe2048", 00:22:54.267 "ffdhe3072", 00:22:54.267 "ffdhe4096", 00:22:54.267 "ffdhe6144", 00:22:54.267 "ffdhe8192" 00:22:54.267 ] 00:22:54.267 } 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "method": "bdev_nvme_set_hotplug", 00:22:54.267 "params": { 00:22:54.267 "period_us": 100000, 00:22:54.267 "enable": false 00:22:54.267 } 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "method": "bdev_malloc_create", 00:22:54.267 "params": { 00:22:54.267 "name": "malloc0", 00:22:54.267 "num_blocks": 8192, 00:22:54.267 "block_size": 4096, 00:22:54.267 "physical_block_size": 4096, 00:22:54.267 "uuid": "d1ae62d4-9333-4684-9feb-22bd21e4031d", 00:22:54.267 "optimal_io_boundary": 0, 00:22:54.267 "md_size": 0, 00:22:54.267 "dif_type": 0, 00:22:54.267 "dif_is_head_of_md": false, 00:22:54.267 "dif_pi_format": 0 00:22:54.267 } 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "method": "bdev_wait_for_examine" 00:22:54.267 } 00:22:54.267 ] 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "subsystem": "nbd", 00:22:54.267 "config": [] 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "subsystem": "scheduler", 00:22:54.267 "config": [ 00:22:54.267 { 00:22:54.267 "method": "framework_set_scheduler", 00:22:54.267 "params": { 00:22:54.267 "name": "static" 00:22:54.267 } 00:22:54.267 } 00:22:54.267 ] 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "subsystem": "nvmf", 00:22:54.267 "config": [ 00:22:54.267 { 00:22:54.267 "method": "nvmf_set_config", 00:22:54.267 "params": { 00:22:54.267 "discovery_filter": "match_any", 00:22:54.267 "admin_cmd_passthru": { 00:22:54.267 "identify_ctrlr": false 00:22:54.267 }, 00:22:54.267 "dhchap_digests": [ 00:22:54.267 "sha256", 00:22:54.267 "sha384", 00:22:54.267 "sha512" 00:22:54.267 ], 00:22:54.267 "dhchap_dhgroups": [ 00:22:54.267 "null", 00:22:54.267 "ffdhe2048", 00:22:54.267 "ffdhe3072", 00:22:54.267 "ffdhe4096", 00:22:54.267 "ffdhe6144", 00:22:54.267 "ffdhe8192" 00:22:54.267 ] 00:22:54.267 } 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "method": "nvmf_set_max_subsystems", 00:22:54.267 "params": { 00:22:54.267 "max_subsystems": 1024 00:22:54.267 } 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "method": "nvmf_set_crdt", 00:22:54.267 "params": { 00:22:54.267 "crdt1": 0, 00:22:54.267 "crdt2": 0, 00:22:54.267 "crdt3": 0 00:22:54.267 } 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "method": "nvmf_create_transport", 00:22:54.267 "params": { 00:22:54.267 "trtype": 
"TCP", 00:22:54.267 "max_queue_depth": 128, 00:22:54.267 "max_io_qpairs_per_ctrlr": 127, 00:22:54.267 "in_capsule_data_size": 4096, 00:22:54.267 "max_io_size": 131072, 00:22:54.267 "io_unit_size": 131072, 00:22:54.267 "max_aq_depth": 128, 00:22:54.267 "num_shared_buffers": 511, 00:22:54.267 "buf_cache_size": 4294967295, 00:22:54.267 "dif_insert_or_strip": false, 00:22:54.267 "zcopy": false, 00:22:54.267 "c2h_success": false, 00:22:54.267 "sock_priority": 0, 00:22:54.267 "abort_timeout_sec": 1, 00:22:54.267 "ack_timeout": 0, 00:22:54.267 "data_wr_pool_size": 0 00:22:54.267 } 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "method": "nvmf_create_subsystem", 00:22:54.267 "params": { 00:22:54.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.267 "allow_any_host": false, 00:22:54.267 "serial_number": "00000000000000000000", 00:22:54.267 "model_number": "SPDK bdev Controller", 00:22:54.267 "max_namespaces": 32, 00:22:54.267 "min_cntlid": 1, 00:22:54.267 "max_cntlid": 65519, 00:22:54.267 "ana_reporting": false 00:22:54.267 } 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "method": "nvmf_subsystem_add_host", 00:22:54.267 "params": { 00:22:54.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.267 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.267 "psk": "key0" 00:22:54.267 } 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "method": "nvmf_subsystem_add_ns", 00:22:54.267 "params": { 00:22:54.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.267 "namespace": { 00:22:54.267 "nsid": 1, 00:22:54.267 "bdev_name": "malloc0", 00:22:54.267 "nguid": "D1AE62D4933346849FEB22BD21E4031D", 00:22:54.267 "uuid": "d1ae62d4-9333-4684-9feb-22bd21e4031d", 00:22:54.267 "no_auto_visible": false 00:22:54.267 } 00:22:54.267 } 00:22:54.267 }, 00:22:54.267 { 00:22:54.267 "method": "nvmf_subsystem_add_listener", 00:22:54.267 "params": { 00:22:54.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.267 "listen_address": { 00:22:54.267 "trtype": "TCP", 00:22:54.267 "adrfam": "IPv4", 00:22:54.267 "traddr": "10.0.0.2", 00:22:54.267 "trsvcid": "4420" 00:22:54.267 }, 00:22:54.267 "secure_channel": false, 00:22:54.267 "sock_impl": "ssl" 00:22:54.267 } 00:22:54.267 } 00:22:54.267 ] 00:22:54.267 } 00:22:54.267 ] 00:22:54.267 }' 00:22:54.267 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:54.529 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:54.529 "subsystems": [ 00:22:54.529 { 00:22:54.529 "subsystem": "keyring", 00:22:54.529 "config": [ 00:22:54.529 { 00:22:54.529 "method": "keyring_file_add_key", 00:22:54.529 "params": { 00:22:54.529 "name": "key0", 00:22:54.529 "path": "/tmp/tmp.U5eqNhvrHB" 00:22:54.529 } 00:22:54.529 } 00:22:54.529 ] 00:22:54.529 }, 00:22:54.529 { 00:22:54.529 "subsystem": "iobuf", 00:22:54.529 "config": [ 00:22:54.529 { 00:22:54.529 "method": "iobuf_set_options", 00:22:54.529 "params": { 00:22:54.529 "small_pool_count": 8192, 00:22:54.529 "large_pool_count": 1024, 00:22:54.529 "small_bufsize": 8192, 00:22:54.529 "large_bufsize": 135168, 00:22:54.529 "enable_numa": false 00:22:54.529 } 00:22:54.529 } 00:22:54.529 ] 00:22:54.529 }, 00:22:54.529 { 00:22:54.529 "subsystem": "sock", 00:22:54.529 "config": [ 00:22:54.529 { 00:22:54.529 "method": "sock_set_default_impl", 00:22:54.529 "params": { 00:22:54.529 "impl_name": "posix" 00:22:54.529 } 00:22:54.529 }, 00:22:54.529 { 00:22:54.529 "method": "sock_impl_set_options", 00:22:54.529 "params": { 00:22:54.529 
"impl_name": "ssl", 00:22:54.529 "recv_buf_size": 4096, 00:22:54.529 "send_buf_size": 4096, 00:22:54.529 "enable_recv_pipe": true, 00:22:54.529 "enable_quickack": false, 00:22:54.529 "enable_placement_id": 0, 00:22:54.529 "enable_zerocopy_send_server": true, 00:22:54.529 "enable_zerocopy_send_client": false, 00:22:54.529 "zerocopy_threshold": 0, 00:22:54.529 "tls_version": 0, 00:22:54.529 "enable_ktls": false 00:22:54.529 } 00:22:54.529 }, 00:22:54.529 { 00:22:54.529 "method": "sock_impl_set_options", 00:22:54.529 "params": { 00:22:54.529 "impl_name": "posix", 00:22:54.529 "recv_buf_size": 2097152, 00:22:54.529 "send_buf_size": 2097152, 00:22:54.529 "enable_recv_pipe": true, 00:22:54.529 "enable_quickack": false, 00:22:54.529 "enable_placement_id": 0, 00:22:54.529 "enable_zerocopy_send_server": true, 00:22:54.529 "enable_zerocopy_send_client": false, 00:22:54.529 "zerocopy_threshold": 0, 00:22:54.529 "tls_version": 0, 00:22:54.529 "enable_ktls": false 00:22:54.529 } 00:22:54.529 } 00:22:54.529 ] 00:22:54.529 }, 00:22:54.529 { 00:22:54.529 "subsystem": "vmd", 00:22:54.529 "config": [] 00:22:54.529 }, 00:22:54.529 { 00:22:54.529 "subsystem": "accel", 00:22:54.529 "config": [ 00:22:54.529 { 00:22:54.529 "method": "accel_set_options", 00:22:54.529 "params": { 00:22:54.529 "small_cache_size": 128, 00:22:54.529 "large_cache_size": 16, 00:22:54.529 "task_count": 2048, 00:22:54.529 "sequence_count": 2048, 00:22:54.529 "buf_count": 2048 00:22:54.530 } 00:22:54.530 } 00:22:54.530 ] 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "subsystem": "bdev", 00:22:54.530 "config": [ 00:22:54.530 { 00:22:54.530 "method": "bdev_set_options", 00:22:54.530 "params": { 00:22:54.530 "bdev_io_pool_size": 65535, 00:22:54.530 "bdev_io_cache_size": 256, 00:22:54.530 "bdev_auto_examine": true, 00:22:54.530 "iobuf_small_cache_size": 128, 00:22:54.530 "iobuf_large_cache_size": 16 00:22:54.530 } 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "method": "bdev_raid_set_options", 00:22:54.530 "params": { 00:22:54.530 "process_window_size_kb": 1024, 00:22:54.530 "process_max_bandwidth_mb_sec": 0 00:22:54.530 } 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "method": "bdev_iscsi_set_options", 00:22:54.530 "params": { 00:22:54.530 "timeout_sec": 30 00:22:54.530 } 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "method": "bdev_nvme_set_options", 00:22:54.530 "params": { 00:22:54.530 "action_on_timeout": "none", 00:22:54.530 "timeout_us": 0, 00:22:54.530 "timeout_admin_us": 0, 00:22:54.530 "keep_alive_timeout_ms": 10000, 00:22:54.530 "arbitration_burst": 0, 00:22:54.530 "low_priority_weight": 0, 00:22:54.530 "medium_priority_weight": 0, 00:22:54.530 "high_priority_weight": 0, 00:22:54.530 "nvme_adminq_poll_period_us": 10000, 00:22:54.530 "nvme_ioq_poll_period_us": 0, 00:22:54.530 "io_queue_requests": 512, 00:22:54.530 "delay_cmd_submit": true, 00:22:54.530 "transport_retry_count": 4, 00:22:54.530 "bdev_retry_count": 3, 00:22:54.530 "transport_ack_timeout": 0, 00:22:54.530 "ctrlr_loss_timeout_sec": 0, 00:22:54.530 "reconnect_delay_sec": 0, 00:22:54.530 "fast_io_fail_timeout_sec": 0, 00:22:54.530 "disable_auto_failback": false, 00:22:54.530 "generate_uuids": false, 00:22:54.530 "transport_tos": 0, 00:22:54.530 "nvme_error_stat": false, 00:22:54.530 "rdma_srq_size": 0, 00:22:54.530 "io_path_stat": false, 00:22:54.530 "allow_accel_sequence": false, 00:22:54.530 "rdma_max_cq_size": 0, 00:22:54.530 "rdma_cm_event_timeout_ms": 0, 00:22:54.530 "dhchap_digests": [ 00:22:54.530 "sha256", 00:22:54.530 "sha384", 00:22:54.530 "sha512" 00:22:54.530 ], 
00:22:54.530 "dhchap_dhgroups": [ 00:22:54.530 "null", 00:22:54.530 "ffdhe2048", 00:22:54.530 "ffdhe3072", 00:22:54.530 "ffdhe4096", 00:22:54.530 "ffdhe6144", 00:22:54.530 "ffdhe8192" 00:22:54.530 ] 00:22:54.530 } 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "method": "bdev_nvme_attach_controller", 00:22:54.530 "params": { 00:22:54.530 "name": "nvme0", 00:22:54.530 "trtype": "TCP", 00:22:54.530 "adrfam": "IPv4", 00:22:54.530 "traddr": "10.0.0.2", 00:22:54.530 "trsvcid": "4420", 00:22:54.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.530 "prchk_reftag": false, 00:22:54.530 "prchk_guard": false, 00:22:54.530 "ctrlr_loss_timeout_sec": 0, 00:22:54.530 "reconnect_delay_sec": 0, 00:22:54.530 "fast_io_fail_timeout_sec": 0, 00:22:54.530 "psk": "key0", 00:22:54.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.530 "hdgst": false, 00:22:54.530 "ddgst": false, 00:22:54.530 "multipath": "multipath" 00:22:54.530 } 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "method": "bdev_nvme_set_hotplug", 00:22:54.530 "params": { 00:22:54.530 "period_us": 100000, 00:22:54.530 "enable": false 00:22:54.530 } 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "method": "bdev_enable_histogram", 00:22:54.530 "params": { 00:22:54.530 "name": "nvme0n1", 00:22:54.530 "enable": true 00:22:54.530 } 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "method": "bdev_wait_for_examine" 00:22:54.530 } 00:22:54.530 ] 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "subsystem": "nbd", 00:22:54.530 "config": [] 00:22:54.530 } 00:22:54.530 ] 00:22:54.530 }' 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1484294 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1484294 ']' 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1484294 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1484294 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1484294' 00:22:54.530 killing process with pid 1484294 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1484294 00:22:54.530 Received shutdown signal, test time was about 1.000000 seconds 00:22:54.530 00:22:54.530 Latency(us) 00:22:54.530 [2024-11-26T06:32:22.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.530 [2024-11-26T06:32:22.628Z] =================================================================================================================== 00:22:54.530 [2024-11-26T06:32:22.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.530 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1484294 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1483969 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1483969 ']' 00:22:54.792 07:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1483969 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1483969 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1483969' 00:22:54.792 killing process with pid 1483969 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1483969 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1483969 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.792 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:54.793 "subsystems": [ 00:22:54.793 { 00:22:54.793 "subsystem": "keyring", 00:22:54.793 "config": [ 00:22:54.793 { 00:22:54.793 "method": "keyring_file_add_key", 00:22:54.793 "params": { 00:22:54.793 "name": "key0", 00:22:54.793 "path": "/tmp/tmp.U5eqNhvrHB" 00:22:54.793 } 00:22:54.793 } 00:22:54.793 ] 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "subsystem": "iobuf", 00:22:54.793 "config": [ 00:22:54.793 { 00:22:54.793 "method": "iobuf_set_options", 00:22:54.793 "params": { 00:22:54.793 "small_pool_count": 8192, 00:22:54.793 "large_pool_count": 1024, 00:22:54.793 "small_bufsize": 8192, 00:22:54.793 "large_bufsize": 135168, 00:22:54.793 "enable_numa": false 00:22:54.793 } 00:22:54.793 } 00:22:54.793 ] 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "subsystem": "sock", 00:22:54.793 "config": [ 00:22:54.793 { 00:22:54.793 "method": "sock_set_default_impl", 00:22:54.793 "params": { 00:22:54.793 "impl_name": "posix" 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "sock_impl_set_options", 00:22:54.793 "params": { 00:22:54.793 "impl_name": "ssl", 00:22:54.793 "recv_buf_size": 4096, 00:22:54.793 "send_buf_size": 4096, 00:22:54.793 "enable_recv_pipe": true, 00:22:54.793 "enable_quickack": false, 00:22:54.793 "enable_placement_id": 0, 00:22:54.793 "enable_zerocopy_send_server": true, 00:22:54.793 "enable_zerocopy_send_client": false, 00:22:54.793 "zerocopy_threshold": 0, 00:22:54.793 "tls_version": 0, 00:22:54.793 "enable_ktls": false 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "sock_impl_set_options", 00:22:54.793 "params": { 00:22:54.793 "impl_name": "posix", 00:22:54.793 "recv_buf_size": 2097152, 00:22:54.793 "send_buf_size": 2097152, 00:22:54.793 "enable_recv_pipe": true, 00:22:54.793 "enable_quickack": false, 00:22:54.793 "enable_placement_id": 0, 00:22:54.793 "enable_zerocopy_send_server": true, 00:22:54.793 "enable_zerocopy_send_client": false, 00:22:54.793 "zerocopy_threshold": 0, 00:22:54.793 "tls_version": 0, 00:22:54.793 "enable_ktls": false 00:22:54.793 } 
00:22:54.793 } 00:22:54.793 ] 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "subsystem": "vmd", 00:22:54.793 "config": [] 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "subsystem": "accel", 00:22:54.793 "config": [ 00:22:54.793 { 00:22:54.793 "method": "accel_set_options", 00:22:54.793 "params": { 00:22:54.793 "small_cache_size": 128, 00:22:54.793 "large_cache_size": 16, 00:22:54.793 "task_count": 2048, 00:22:54.793 "sequence_count": 2048, 00:22:54.793 "buf_count": 2048 00:22:54.793 } 00:22:54.793 } 00:22:54.793 ] 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "subsystem": "bdev", 00:22:54.793 "config": [ 00:22:54.793 { 00:22:54.793 "method": "bdev_set_options", 00:22:54.793 "params": { 00:22:54.793 "bdev_io_pool_size": 65535, 00:22:54.793 "bdev_io_cache_size": 256, 00:22:54.793 "bdev_auto_examine": true, 00:22:54.793 "iobuf_small_cache_size": 128, 00:22:54.793 "iobuf_large_cache_size": 16 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "bdev_raid_set_options", 00:22:54.793 "params": { 00:22:54.793 "process_window_size_kb": 1024, 00:22:54.793 "process_max_bandwidth_mb_sec": 0 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "bdev_iscsi_set_options", 00:22:54.793 "params": { 00:22:54.793 "timeout_sec": 30 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "bdev_nvme_set_options", 00:22:54.793 "params": { 00:22:54.793 "action_on_timeout": "none", 00:22:54.793 "timeout_us": 0, 00:22:54.793 "timeout_admin_us": 0, 00:22:54.793 "keep_alive_timeout_ms": 10000, 00:22:54.793 "arbitration_burst": 0, 00:22:54.793 "low_priority_weight": 0, 00:22:54.793 "medium_priority_weight": 0, 00:22:54.793 "high_priority_weight": 0, 00:22:54.793 "nvme_adminq_poll_period_us": 10000, 00:22:54.793 "nvme_ioq_poll_period_us": 0, 00:22:54.793 "io_queue_requests": 0, 00:22:54.793 "delay_cmd_submit": true, 00:22:54.793 "transport_retry_count": 4, 00:22:54.793 "bdev_retry_count": 3, 00:22:54.793 "transport_ack_timeout": 0, 00:22:54.793 "ctrlr_loss_timeout_sec": 0, 00:22:54.793 "reconnect_delay_sec": 0, 00:22:54.793 "fast_io_fail_timeout_sec": 0, 00:22:54.793 "disable_auto_failback": false, 00:22:54.793 "generate_uuids": false, 00:22:54.793 "transport_tos": 0, 00:22:54.793 "nvme_error_stat": false, 00:22:54.793 "rdma_srq_size": 0, 00:22:54.793 "io_path_stat": false, 00:22:54.793 "allow_accel_sequence": false, 00:22:54.793 "rdma_max_cq_size": 0, 00:22:54.793 "rdma_cm_event_timeout_ms": 0, 00:22:54.793 "dhchap_digests": [ 00:22:54.793 "sha256", 00:22:54.793 "sha384", 00:22:54.793 "sha512" 00:22:54.793 ], 00:22:54.793 "dhchap_dhgroups": [ 00:22:54.793 "null", 00:22:54.793 "ffdhe2048", 00:22:54.793 "ffdhe3072", 00:22:54.793 "ffdhe4096", 00:22:54.793 "ffdhe6144", 00:22:54.793 "ffdhe8192" 00:22:54.793 ] 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "bdev_nvme_set_hotplug", 00:22:54.793 "params": { 00:22:54.793 "period_us": 100000, 00:22:54.793 "enable": false 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "bdev_malloc_create", 00:22:54.793 "params": { 00:22:54.793 "name": "malloc0", 00:22:54.793 "num_blocks": 8192, 00:22:54.793 "block_size": 4096, 00:22:54.793 "physical_block_size": 4096, 00:22:54.793 "uuid": "d1ae62d4-9333-4684-9feb-22bd21e4031d", 00:22:54.793 "optimal_io_boundary": 0, 00:22:54.793 "md_size": 0, 00:22:54.793 "dif_type": 0, 00:22:54.793 "dif_is_head_of_md": false, 00:22:54.793 "dif_pi_format": 0 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "bdev_wait_for_examine" 00:22:54.793 } 00:22:54.793 ] 
00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "subsystem": "nbd", 00:22:54.793 "config": [] 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "subsystem": "scheduler", 00:22:54.793 "config": [ 00:22:54.793 { 00:22:54.793 "method": "framework_set_scheduler", 00:22:54.793 "params": { 00:22:54.793 "name": "static" 00:22:54.793 } 00:22:54.793 } 00:22:54.793 ] 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "subsystem": "nvmf", 00:22:54.793 "config": [ 00:22:54.793 { 00:22:54.793 "method": "nvmf_set_config", 00:22:54.793 "params": { 00:22:54.793 "discovery_filter": "match_any", 00:22:54.793 "admin_cmd_passthru": { 00:22:54.793 "identify_ctrlr": false 00:22:54.793 }, 00:22:54.793 "dhchap_digests": [ 00:22:54.793 "sha256", 00:22:54.793 "sha384", 00:22:54.793 "sha512" 00:22:54.793 ], 00:22:54.793 "dhchap_dhgroups": [ 00:22:54.793 "null", 00:22:54.793 "ffdhe2048", 00:22:54.793 "ffdhe3072", 00:22:54.793 "ffdhe4096", 00:22:54.793 "ffdhe6144", 00:22:54.793 "ffdhe8192" 00:22:54.793 ] 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "nvmf_set_max_subsystems", 00:22:54.793 "params": { 00:22:54.793 "max_subsystems": 1024 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "nvmf_set_crdt", 00:22:54.793 "params": { 00:22:54.793 "crdt1": 0, 00:22:54.793 "crdt2": 0, 00:22:54.793 "crdt3": 0 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "nvmf_create_transport", 00:22:54.793 "params": { 00:22:54.793 "trtype": "TCP", 00:22:54.793 "max_queue_depth": 128, 00:22:54.793 "max_io_qpairs_per_ctrlr": 127, 00:22:54.793 "in_capsule_data_size": 4096, 00:22:54.793 "max_io_size": 131072, 00:22:54.793 "io_unit_size": 131072, 00:22:54.793 "max_aq_depth": 128, 00:22:54.793 "num_shared_buffers": 511, 00:22:54.793 "buf_cache_size": 4294967295, 00:22:54.793 "dif_insert_or_strip": false, 00:22:54.793 "zcopy": false, 00:22:54.793 "c2h_success": false, 00:22:54.793 "sock_priority": 0, 00:22:54.793 "abort_timeout_sec": 1, 00:22:54.793 "ack_timeout": 0, 00:22:54.793 "data_wr_pool_size": 0 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "nvmf_create_subsystem", 00:22:54.793 "params": { 00:22:54.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.793 "allow_any_host": false, 00:22:54.793 "serial_number": "00000000000000000000", 00:22:54.793 "model_number": "SPDK bdev Controller", 00:22:54.793 "max_namespaces": 32, 00:22:54.793 "min_cntlid": 1, 00:22:54.793 "max_cntlid": 65519, 00:22:54.793 "ana_reporting": false 00:22:54.793 } 00:22:54.793 }, 00:22:54.793 { 00:22:54.793 "method": "nvmf_subsystem_add_host", 00:22:54.793 "params": { 00:22:54.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.793 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.793 "psk": "key0" 00:22:54.793 } 00:22:54.793 }, 00:22:54.794 { 00:22:54.794 "method": "nvmf_subsystem_add_ns", 00:22:54.794 "params": { 00:22:54.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.794 "namespace": { 00:22:54.794 "nsid": 1, 00:22:54.794 "bdev_name": "malloc0", 00:22:54.794 "nguid": "D1AE62D4933346849FEB22BD21E4031D", 00:22:54.794 "uuid": "d1ae62d4-9333-4684-9feb-22bd21e4031d", 00:22:54.794 "no_auto_visible": false 00:22:54.794 } 00:22:54.794 } 00:22:54.794 }, 00:22:54.794 { 00:22:54.794 "method": "nvmf_subsystem_add_listener", 00:22:54.794 "params": { 00:22:54.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.794 "listen_address": { 00:22:54.794 "trtype": "TCP", 00:22:54.794 "adrfam": "IPv4", 00:22:54.794 "traddr": "10.0.0.2", 00:22:54.794 "trsvcid": "4420" 00:22:54.794 }, 00:22:54.794 "secure_channel": false, 00:22:54.794 
"sock_impl": "ssl" 00:22:54.794 } 00:22:54.794 } 00:22:54.794 ] 00:22:54.794 } 00:22:54.794 ] 00:22:54.794 }' 00:22:54.794 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.794 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1484966 00:22:54.794 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:54.794 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1484966 00:22:54.794 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1484966 ']' 00:22:54.794 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.794 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.794 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.794 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.794 07:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.055 [2024-11-26 07:32:22.893120] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:22:55.055 [2024-11-26 07:32:22.893184] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.055 [2024-11-26 07:32:22.982084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.056 [2024-11-26 07:32:23.011312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.056 [2024-11-26 07:32:23.011340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.056 [2024-11-26 07:32:23.011346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.056 [2024-11-26 07:32:23.011351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.056 [2024-11-26 07:32:23.011355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:55.056 [2024-11-26 07:32:23.011832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.317 [2024-11-26 07:32:23.204831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.317 [2024-11-26 07:32:23.236864] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.317 [2024-11-26 07:32:23.237081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1485010 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1485010 /var/tmp/bdevperf.sock 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1485010 ']' 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
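
The bdevperf process launched below is configured the same way: the bperfcfg JSON captured earlier over /var/tmp/bdevperf.sock is passed as -c /dev/fd/63, so the keyring and the TLS-attached controller come up from the saved config rather than per-RPC setup. A matching sketch under the same assumptions:

    bperfcfg=$($rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    $rootdir/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")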
00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.889 07:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:55.889 "subsystems": [ 00:22:55.889 { 00:22:55.889 "subsystem": "keyring", 00:22:55.889 "config": [ 00:22:55.889 { 00:22:55.889 "method": "keyring_file_add_key", 00:22:55.889 "params": { 00:22:55.889 "name": "key0", 00:22:55.889 "path": "/tmp/tmp.U5eqNhvrHB" 00:22:55.889 } 00:22:55.889 } 00:22:55.889 ] 00:22:55.889 }, 00:22:55.889 { 00:22:55.889 "subsystem": "iobuf", 00:22:55.889 "config": [ 00:22:55.889 { 00:22:55.889 "method": "iobuf_set_options", 00:22:55.889 "params": { 00:22:55.889 "small_pool_count": 8192, 00:22:55.889 "large_pool_count": 1024, 00:22:55.889 "small_bufsize": 8192, 00:22:55.889 "large_bufsize": 135168, 00:22:55.889 "enable_numa": false 00:22:55.889 } 00:22:55.889 } 00:22:55.889 ] 00:22:55.889 }, 00:22:55.889 { 00:22:55.889 "subsystem": "sock", 00:22:55.889 "config": [ 00:22:55.889 { 00:22:55.889 "method": "sock_set_default_impl", 00:22:55.889 "params": { 00:22:55.889 "impl_name": "posix" 00:22:55.889 } 00:22:55.889 }, 00:22:55.889 { 00:22:55.889 "method": "sock_impl_set_options", 00:22:55.889 "params": { 00:22:55.889 "impl_name": "ssl", 00:22:55.889 "recv_buf_size": 4096, 00:22:55.889 "send_buf_size": 4096, 00:22:55.889 "enable_recv_pipe": true, 00:22:55.889 "enable_quickack": false, 00:22:55.889 "enable_placement_id": 0, 00:22:55.889 "enable_zerocopy_send_server": true, 00:22:55.889 "enable_zerocopy_send_client": false, 00:22:55.889 "zerocopy_threshold": 0, 00:22:55.889 "tls_version": 0, 00:22:55.889 "enable_ktls": false 00:22:55.889 } 00:22:55.889 }, 00:22:55.889 { 00:22:55.889 "method": "sock_impl_set_options", 00:22:55.889 "params": { 00:22:55.889 "impl_name": "posix", 00:22:55.889 "recv_buf_size": 2097152, 00:22:55.889 "send_buf_size": 2097152, 00:22:55.889 "enable_recv_pipe": true, 00:22:55.889 "enable_quickack": false, 00:22:55.889 "enable_placement_id": 0, 00:22:55.889 "enable_zerocopy_send_server": true, 00:22:55.889 "enable_zerocopy_send_client": false, 00:22:55.889 "zerocopy_threshold": 0, 00:22:55.889 "tls_version": 0, 00:22:55.889 "enable_ktls": false 00:22:55.889 } 00:22:55.889 } 00:22:55.889 ] 00:22:55.889 }, 00:22:55.889 { 00:22:55.889 "subsystem": "vmd", 00:22:55.889 "config": [] 00:22:55.889 }, 00:22:55.889 { 00:22:55.889 "subsystem": "accel", 00:22:55.889 "config": [ 00:22:55.889 { 00:22:55.889 "method": "accel_set_options", 00:22:55.889 "params": { 00:22:55.889 "small_cache_size": 128, 00:22:55.889 "large_cache_size": 16, 00:22:55.889 "task_count": 2048, 00:22:55.889 "sequence_count": 2048, 00:22:55.889 "buf_count": 2048 00:22:55.889 } 00:22:55.889 } 00:22:55.889 ] 00:22:55.889 }, 00:22:55.889 { 00:22:55.889 "subsystem": "bdev", 00:22:55.889 "config": [ 00:22:55.889 { 00:22:55.889 "method": "bdev_set_options", 00:22:55.889 "params": { 00:22:55.889 "bdev_io_pool_size": 65535, 00:22:55.889 "bdev_io_cache_size": 256, 00:22:55.889 "bdev_auto_examine": true, 00:22:55.890 "iobuf_small_cache_size": 128, 00:22:55.890 "iobuf_large_cache_size": 16 00:22:55.890 } 00:22:55.890 }, 00:22:55.890 { 00:22:55.890 "method": 
"bdev_raid_set_options", 00:22:55.890 "params": { 00:22:55.890 "process_window_size_kb": 1024, 00:22:55.890 "process_max_bandwidth_mb_sec": 0 00:22:55.890 } 00:22:55.890 }, 00:22:55.890 { 00:22:55.890 "method": "bdev_iscsi_set_options", 00:22:55.890 "params": { 00:22:55.890 "timeout_sec": 30 00:22:55.890 } 00:22:55.890 }, 00:22:55.890 { 00:22:55.890 "method": "bdev_nvme_set_options", 00:22:55.890 "params": { 00:22:55.890 "action_on_timeout": "none", 00:22:55.890 "timeout_us": 0, 00:22:55.890 "timeout_admin_us": 0, 00:22:55.890 "keep_alive_timeout_ms": 10000, 00:22:55.890 "arbitration_burst": 0, 00:22:55.890 "low_priority_weight": 0, 00:22:55.890 "medium_priority_weight": 0, 00:22:55.890 "high_priority_weight": 0, 00:22:55.890 "nvme_adminq_poll_period_us": 10000, 00:22:55.890 "nvme_ioq_poll_period_us": 0, 00:22:55.890 "io_queue_requests": 512, 00:22:55.890 "delay_cmd_submit": true, 00:22:55.890 "transport_retry_count": 4, 00:22:55.890 "bdev_retry_count": 3, 00:22:55.890 "transport_ack_timeout": 0, 00:22:55.890 "ctrlr_loss_timeout_sec": 0, 00:22:55.890 "reconnect_delay_sec": 0, 00:22:55.890 "fast_io_fail_timeout_sec": 0, 00:22:55.890 "disable_auto_failback": false, 00:22:55.890 "generate_uuids": false, 00:22:55.890 "transport_tos": 0, 00:22:55.890 "nvme_error_stat": false, 00:22:55.890 "rdma_srq_size": 0, 00:22:55.890 "io_path_stat": false, 00:22:55.890 "allow_accel_sequence": false, 00:22:55.890 "rdma_max_cq_size": 0, 00:22:55.890 "rdma_cm_event_timeout_ms": 0, 00:22:55.890 "dhchap_digests": [ 00:22:55.890 "sha256", 00:22:55.890 "sha384", 00:22:55.890 "sha512" 00:22:55.890 ], 00:22:55.890 "dhchap_dhgroups": [ 00:22:55.890 "null", 00:22:55.890 "ffdhe2048", 00:22:55.890 "ffdhe3072", 00:22:55.890 "ffdhe4096", 00:22:55.890 "ffdhe6144", 00:22:55.890 "ffdhe8192" 00:22:55.890 ] 00:22:55.890 } 00:22:55.890 }, 00:22:55.890 { 00:22:55.890 "method": "bdev_nvme_attach_controller", 00:22:55.890 "params": { 00:22:55.890 "name": "nvme0", 00:22:55.890 "trtype": "TCP", 00:22:55.890 "adrfam": "IPv4", 00:22:55.890 "traddr": "10.0.0.2", 00:22:55.890 "trsvcid": "4420", 00:22:55.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.890 "prchk_reftag": false, 00:22:55.890 "prchk_guard": false, 00:22:55.890 "ctrlr_loss_timeout_sec": 0, 00:22:55.890 "reconnect_delay_sec": 0, 00:22:55.890 "fast_io_fail_timeout_sec": 0, 00:22:55.890 "psk": "key0", 00:22:55.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:55.890 "hdgst": false, 00:22:55.890 "ddgst": false, 00:22:55.890 "multipath": "multipath" 00:22:55.890 } 00:22:55.890 }, 00:22:55.890 { 00:22:55.890 "method": "bdev_nvme_set_hotplug", 00:22:55.890 "params": { 00:22:55.890 "period_us": 100000, 00:22:55.890 "enable": false 00:22:55.890 } 00:22:55.890 }, 00:22:55.890 { 00:22:55.890 "method": "bdev_enable_histogram", 00:22:55.890 "params": { 00:22:55.890 "name": "nvme0n1", 00:22:55.890 "enable": true 00:22:55.890 } 00:22:55.890 }, 00:22:55.890 { 00:22:55.890 "method": "bdev_wait_for_examine" 00:22:55.890 } 00:22:55.890 ] 00:22:55.890 }, 00:22:55.890 { 00:22:55.890 "subsystem": "nbd", 00:22:55.890 "config": [] 00:22:55.890 } 00:22:55.890 ] 00:22:55.890 }' 00:22:55.890 [2024-11-26 07:32:23.779052] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:22:55.890 [2024-11-26 07:32:23.779102] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485010 ]
00:22:55.890 [2024-11-26 07:32:23.859884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:55.890 [2024-11-26 07:32:23.889863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:56.151 [2024-11-26 07:32:24.024772] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:56.724 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:56.724 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:56.724 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:56.724 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name'
00:22:56.724 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:56.724 07:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:56.724 Running I/O for 1 seconds...
00:22:58.131 4624.00 IOPS, 18.06 MiB/s
00:22:58.131 Latency(us)
00:22:58.131 [2024-11-26T06:32:26.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:58.131 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:58.131 Verification LBA range: start 0x0 length 0x2000
00:22:58.131 nvme0n1 : 1.01 4690.18 18.32 0.00 0.00 27132.21 4696.75 76021.76
00:22:58.131 [2024-11-26T06:32:26.229Z] ===================================================================================================================
00:22:58.131 [2024-11-26T06:32:26.229Z] Total : 4690.18 18.32 0.00 0.00 27132.21 4696.75 76021.76
00:22:58.131 {
00:22:58.131 "results": [
00:22:58.131 {
00:22:58.131 "job": "nvme0n1",
00:22:58.131 "core_mask": "0x2",
00:22:58.131 "workload": "verify",
00:22:58.131 "status": "finished",
00:22:58.131 "verify_range": {
00:22:58.131 "start": 0,
00:22:58.131 "length": 8192
00:22:58.131 },
00:22:58.131 "queue_depth": 128,
00:22:58.131 "io_size": 4096,
00:22:58.131 "runtime": 1.013394,
00:22:58.131 "iops": 4690.179732660742,
00:22:58.131 "mibps": 18.321014580706024,
00:22:58.131 "io_failed": 0,
00:22:58.131 "io_timeout": 0,
00:22:58.131 "avg_latency_us": 27132.212514201557,
00:22:58.131 "min_latency_us": 4696.746666666667,
00:22:58.131 "max_latency_us": 76021.76
00:22:58.131 }
00:22:58.131 ],
00:22:58.131 "core_count": 1
00:22:58.131 }
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:22:58.131 nvmf_trace.0
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1485010
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1485010 ']'
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1485010
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:58.131 07:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1485010
00:22:58.131 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1485010'
00:22:58.132 killing process with pid 1485010
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1485010
00:22:58.132 Received shutdown signal, test time was about 1.000000 seconds
00:22:58.132
00:22:58.132 Latency(us)
00:22:58.132 [2024-11-26T06:32:26.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:58.132 [2024-11-26T06:32:26.230Z] ===================================================================================================================
00:22:58.132 [2024-11-26T06:32:26.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1485010
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:58.132 rmmod nvme_tcp
00:22:58.132 rmmod nvme_fabrics
00:22:58.132 rmmod nvme_keyring
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1484966 ']'
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1484966
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1484966 ']'
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1484966
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:58.132 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1484966
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1484966'
00:22:58.392 killing process with pid 1484966
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1484966
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1484966
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:58.392 07:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hUgClmjeL0 /tmp/tmp.332M6t1Dom /tmp/tmp.U5eqNhvrHB
00:23:00.938
00:23:00.938 real 1m28.951s
00:23:00.938 user 2m21.253s
00:23:00.938 sys 0m27.017s
00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:00.938 ************************************
00:23:00.938 END TEST nvmf_tls
00:23:00.938 ************************************ 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:00.938 ************************************ 00:23:00.938 START TEST nvmf_fips 00:23:00.938 ************************************ 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:00.938 * Looking for test storage... 00:23:00.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.938 --rc genhtml_branch_coverage=1 00:23:00.938 --rc genhtml_function_coverage=1 00:23:00.938 --rc genhtml_legend=1 00:23:00.938 --rc geninfo_all_blocks=1 00:23:00.938 --rc geninfo_unexecuted_blocks=1 00:23:00.938 00:23:00.938 ' 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.938 --rc genhtml_branch_coverage=1 00:23:00.938 --rc genhtml_function_coverage=1 00:23:00.938 --rc genhtml_legend=1 00:23:00.938 --rc geninfo_all_blocks=1 00:23:00.938 --rc geninfo_unexecuted_blocks=1 00:23:00.938 00:23:00.938 ' 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.938 --rc genhtml_branch_coverage=1 00:23:00.938 --rc genhtml_function_coverage=1 00:23:00.938 --rc genhtml_legend=1 00:23:00.938 --rc geninfo_all_blocks=1 00:23:00.938 --rc geninfo_unexecuted_blocks=1 00:23:00.938 00:23:00.938 ' 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.938 --rc genhtml_branch_coverage=1 00:23:00.938 --rc genhtml_function_coverage=1 00:23:00.938 --rc genhtml_legend=1 00:23:00.938 --rc geninfo_all_blocks=1 00:23:00.938 --rc geninfo_unexecuted_blocks=1 00:23:00.938 00:23:00.938 ' 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.938 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:00.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:00.939 07:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:00.939 Error setting digest 00:23:00.939 4052FECEA07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:00.939 4052FECEA07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:00.939 
07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:00.939 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:00.940 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:00.940 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.940 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.940 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.940 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:00.940 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:00.940 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:00.940 07:32:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.087 07:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.087 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:09.088 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:09.088 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.088 07:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:09.088 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:09.088 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.088 07:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.088 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:23:09.089 00:23:09.089 --- 10.0.0.2 ping statistics --- 00:23:09.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.089 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:09.089 00:23:09.089 --- 10.0.0.1 ping statistics --- 00:23:09.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.089 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1489727 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1489727 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1489727 ']' 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.089 07:32:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:09.089 [2024-11-26 07:32:36.537624] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
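A note on the topology the pings just validated: nvmftestinit splits the two e810 ports of one host into target and initiator endpoints by moving cvl_0_0 (10.0.0.2) into the namespace cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1) stays in the root namespace, then launches nvmf_tgt inside that namespace. A condensed sketch of the same setup, using the interface names from this run (substitute your own NIC names):

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # sanity-check the link
  # then start the target inside the namespace, as nvmfappstart does below:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2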
00:23:09.089 [2024-11-26 07:32:36.537697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.089 [2024-11-26 07:32:36.636998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.089 [2024-11-26 07:32:36.688007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.089 [2024-11-26 07:32:36.688055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.089 [2024-11-26 07:32:36.688063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.089 [2024-11-26 07:32:36.688071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.089 [2024-11-26 07:32:36.688077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.089 [2024-11-26 07:32:36.688870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.a8y 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.a8y 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.a8y 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.a8y 00:23:09.350 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:09.611 [2024-11-26 07:32:37.552500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.611 [2024-11-26 07:32:37.568498] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.611 [2024-11-26 07:32:37.568818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.611 malloc0 00:23:09.611 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.611 07:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1490064 00:23:09.611 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1490064 /var/tmp/bdevperf.sock 00:23:09.611 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:09.611 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1490064 ']' 00:23:09.611 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.611 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.611 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.611 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.611 07:32:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:09.872 [2024-11-26 07:32:37.710694] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:23:09.872 [2024-11-26 07:32:37.710772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490064 ] 00:23:09.872 [2024-11-26 07:32:37.804215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.872 [2024-11-26 07:32:37.855043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.444 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.444 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:10.444 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.a8y 00:23:10.706 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:10.966 [2024-11-26 07:32:38.853243] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.966 TLSTESTn1 00:23:10.966 07:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:10.966 Running I/O for 10 seconds... 
00:23:13.293 5212.00 IOPS, 20.36 MiB/s [2024-11-26T06:32:42.333Z] 5190.00 IOPS, 20.27 MiB/s [2024-11-26T06:32:43.276Z] 5250.33 IOPS, 20.51 MiB/s [2024-11-26T06:32:44.218Z] 5488.00 IOPS, 21.44 MiB/s [2024-11-26T06:32:45.160Z] 5622.00 IOPS, 21.96 MiB/s [2024-11-26T06:32:46.102Z] 5566.83 IOPS, 21.75 MiB/s [2024-11-26T06:32:47.486Z] 5510.43 IOPS, 21.53 MiB/s [2024-11-26T06:32:48.429Z] 5554.62 IOPS, 21.70 MiB/s [2024-11-26T06:32:49.371Z] 5615.44 IOPS, 21.94 MiB/s [2024-11-26T06:32:49.371Z] 5599.20 IOPS, 21.87 MiB/s 00:23:21.273 Latency(us) 00:23:21.273 [2024-11-26T06:32:49.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.273 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:21.273 Verification LBA range: start 0x0 length 0x2000 00:23:21.273 TLSTESTn1 : 10.02 5599.28 21.87 0.00 0.00 22819.08 5734.40 110537.39 00:23:21.273 [2024-11-26T06:32:49.371Z] =================================================================================================================== 00:23:21.273 [2024-11-26T06:32:49.371Z] Total : 5599.28 21.87 0.00 0.00 22819.08 5734.40 110537.39 00:23:21.273 { 00:23:21.273 "results": [ 00:23:21.273 { 00:23:21.273 "job": "TLSTESTn1", 00:23:21.273 "core_mask": "0x4", 00:23:21.273 "workload": "verify", 00:23:21.273 "status": "finished", 00:23:21.273 "verify_range": { 00:23:21.273 "start": 0, 00:23:21.273 "length": 8192 00:23:21.273 }, 00:23:21.273 "queue_depth": 128, 00:23:21.273 "io_size": 4096, 00:23:21.273 "runtime": 10.022726, 00:23:21.273 "iops": 5599.275087436292, 00:23:21.273 "mibps": 21.872168310298015, 00:23:21.273 "io_failed": 0, 00:23:21.273 "io_timeout": 0, 00:23:21.273 "avg_latency_us": 22819.08021097648, 00:23:21.273 "min_latency_us": 5734.4, 00:23:21.273 "max_latency_us": 110537.38666666667 00:23:21.273 } 00:23:21.273 ], 00:23:21.273 "core_count": 1 00:23:21.273 } 00:23:21.273 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:21.274 nvmf_trace.0 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1490064 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1490064 ']' 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1490064 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1490064 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1490064' 00:23:21.274 killing process with pid 1490064 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1490064 00:23:21.274 Received shutdown signal, test time was about 10.000000 seconds 00:23:21.274 00:23:21.274 Latency(us) 00:23:21.274 [2024-11-26T06:32:49.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.274 [2024-11-26T06:32:49.372Z] =================================================================================================================== 00:23:21.274 [2024-11-26T06:32:49.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.274 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1490064 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.534 rmmod nvme_tcp 00:23:21.534 rmmod nvme_fabrics 00:23:21.534 rmmod nvme_keyring 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1489727 ']' 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1489727 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1489727 ']' 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1489727 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1489727 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:21.534 07:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1489727' 00:23:21.534 killing process with pid 1489727 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1489727 00:23:21.534 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1489727 00:23:21.795 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:21.795 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:21.795 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:21.795 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:21.795 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:21.795 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:21.796 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:21.796 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.796 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:21.796 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.796 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.796 07:32:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.709 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.709 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.a8y 00:23:23.709 00:23:23.709 real 0m23.203s 00:23:23.709 user 0m24.882s 00:23:23.709 sys 0m9.632s 00:23:23.709 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.709 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:23.709 ************************************ 00:23:23.709 END TEST nvmf_fips 00:23:23.709 ************************************ 00:23:23.709 07:32:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:23.709 07:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.709 07:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.709 07:32:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:23.971 ************************************ 00:23:23.971 START TEST nvmf_control_msg_list 00:23:23.971 ************************************ 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:23.971 * Looking for test storage... 
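
For reference, the fips teardown logged above is SPDK's stock nvmftestfini sequence. A condensed sketch of the same steps, assuming the target pid sits in a placeholder variable $nvmfpid (the killprocess and remove_spdk_ns helpers wrap roughly this):

  kill -9 "$nvmfpid" && wait "$nvmfpid" 2>/dev/null       # killprocess: stop the nvmf target
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring       # the rmmod lines captured above
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip only SPDK-tagged firewall rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null             # remove_spdk_ns
  ip -4 addr flush cvl_0_1                                # release the initiator-side address
  rm -f /tmp/spdk-psk.a8y                                 # per-run TLS PSK file from this test
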
00:23:23.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.971 07:32:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:23.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.971 --rc genhtml_branch_coverage=1 00:23:23.971 --rc genhtml_function_coverage=1 00:23:23.971 --rc genhtml_legend=1 00:23:23.971 --rc geninfo_all_blocks=1 00:23:23.971 --rc geninfo_unexecuted_blocks=1 00:23:23.971 00:23:23.971 ' 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:23.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.971 --rc genhtml_branch_coverage=1 00:23:23.971 --rc genhtml_function_coverage=1 00:23:23.971 --rc genhtml_legend=1 00:23:23.971 --rc geninfo_all_blocks=1 00:23:23.971 --rc geninfo_unexecuted_blocks=1 00:23:23.971 00:23:23.971 ' 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:23.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.971 --rc genhtml_branch_coverage=1 00:23:23.971 --rc genhtml_function_coverage=1 00:23:23.971 --rc genhtml_legend=1 00:23:23.971 --rc geninfo_all_blocks=1 00:23:23.971 --rc geninfo_unexecuted_blocks=1 00:23:23.971 00:23:23.971 ' 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:23.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.971 --rc genhtml_branch_coverage=1 00:23:23.971 --rc genhtml_function_coverage=1 00:23:23.971 --rc genhtml_legend=1 00:23:23.971 --rc geninfo_all_blocks=1 00:23:23.971 --rc geninfo_unexecuted_blocks=1 00:23:23.971 00:23:23.971 ' 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.971 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.972 07:32:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:32.113 07:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:32.113 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.113 07:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:32.113 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:32.113 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:32.113 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.113 07:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:23:32.113 00:23:32.113 --- 10.0.0.2 ping statistics --- 00:23:32.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.113 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:23:32.113 00:23:32.113 --- 10.0.0.1 ping statistics --- 00:23:32.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.113 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1496439 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1496439 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1496439 ']' 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.113 07:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:32.113 [2024-11-26 07:32:59.604830] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:23:32.113 [2024-11-26 07:32:59.604896] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.113 [2024-11-26 07:32:59.706051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.113 [2024-11-26 07:32:59.757643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.113 [2024-11-26 07:32:59.757699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.113 [2024-11-26 07:32:59.757708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.113 [2024-11-26 07:32:59.757715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.113 [2024-11-26 07:32:59.757721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
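
The target is launched under "ip netns exec cvl_0_0_ns_spdk" because nvmf_tcp_init, logged a few entries earlier, splits the two E810 ports across namespaces: cvl_0_0 becomes the target side and cvl_0_1 stays in the root namespace as the initiator. Condensed from the commands above, the topology setup is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                                  # tagged so cleanup can strip it
  ping -c 1 10.0.0.2                                                  # reachability check before starting the target
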
00:23:32.113 [2024-11-26 07:32:59.758526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.374 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.374 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:23:32.374 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.374 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:32.374 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:32.637 [2024-11-26 07:33:00.485155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:32.637 Malloc0 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.637 07:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:32.637 [2024-11-26 07:33:00.539795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1496797 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1496798 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1496799 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1496797 00:23:32.637 07:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:32.637 [2024-11-26 07:33:00.640731] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:32.637 [2024-11-26 07:33:00.640988] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:32.637 [2024-11-26 07:33:00.641289] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:34.028 Initializing NVMe Controllers 00:23:34.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:34.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:34.028 Initialization complete. Launching workers. 
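
The three latency tables that follow are the point of this test: the transport was created with --control-msg-num 1, so the three concurrent spdk_nvme_perf clients (lcores 1, 2 and 3, pids 1496797-1496799) contend for a single shared control-message buffer. One client (core 2) runs at full speed (~2263 IOPS, 0.44 ms average latency), while the other two complete only 25 I/Os each at ~41 ms average, consistent with their requests queueing behind the exhausted control-message list.
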
00:23:34.028 ======================================================== 00:23:34.028 Latency(us) 00:23:34.028 Device Information : IOPS MiB/s Average min max 00:23:34.028 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40897.75 40734.44 40988.01 00:23:34.028 ======================================================== 00:23:34.028 Total : 25.00 0.10 40897.75 40734.44 40988.01 00:23:34.028 00:23:34.028 Initializing NVMe Controllers 00:23:34.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:34.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:34.028 Initialization complete. Launching workers. 00:23:34.028 ======================================================== 00:23:34.028 Latency(us) 00:23:34.028 Device Information : IOPS MiB/s Average min max 00:23:34.028 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2263.00 8.84 441.66 146.69 709.56 00:23:34.028 ======================================================== 00:23:34.028 Total : 2263.00 8.84 441.66 146.69 709.56 00:23:34.028 00:23:34.028 Initializing NVMe Controllers 00:23:34.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:34.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:34.028 Initialization complete. Launching workers. 00:23:34.028 ======================================================== 00:23:34.028 Latency(us) 00:23:34.028 Device Information : IOPS MiB/s Average min max 00:23:34.028 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40923.73 40794.34 41363.20 00:23:34.028 ======================================================== 00:23:34.028 Total : 25.00 0.10 40923.73 40794.34 41363.20 00:23:34.028 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1496798 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1496799 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:34.028 rmmod nvme_tcp 00:23:34.028 rmmod nvme_fabrics 00:23:34.028 rmmod nvme_keyring 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 1496439 ']' 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1496439 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1496439 ']' 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1496439 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1496439 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1496439' 00:23:34.028 killing process with pid 1496439 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1496439 00:23:34.028 07:33:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1496439 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.290 07:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.206 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.206 00:23:36.206 real 0m12.424s 00:23:36.206 user 0m7.921s 00:23:36.206 sys 0m6.631s 00:23:36.206 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.206 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:36.206 ************************************ 00:23:36.206 END TEST nvmf_control_msg_list 00:23:36.206 
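
In the log above, the rpc_cmd helper wraps scripts/rpc.py against the target's default /var/tmp/spdk.sock. The whole fixture the test built reduces to five RPC calls (a sketch; the helper adds retry and tracing logic not shown here):

  scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512                       # 32 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
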
************************************ 00:23:36.206 07:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:36.206 07:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:36.206 07:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.206 07:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:36.469 ************************************ 00:23:36.469 START TEST nvmf_wait_for_buf 00:23:36.469 ************************************ 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:36.469 * Looking for test storage... 00:23:36.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.469 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.470 --rc genhtml_branch_coverage=1 00:23:36.470 --rc genhtml_function_coverage=1 00:23:36.470 --rc genhtml_legend=1 00:23:36.470 --rc geninfo_all_blocks=1 00:23:36.470 --rc geninfo_unexecuted_blocks=1 00:23:36.470 00:23:36.470 ' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.470 --rc genhtml_branch_coverage=1 00:23:36.470 --rc genhtml_function_coverage=1 00:23:36.470 --rc genhtml_legend=1 00:23:36.470 --rc geninfo_all_blocks=1 00:23:36.470 --rc geninfo_unexecuted_blocks=1 00:23:36.470 00:23:36.470 ' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.470 --rc genhtml_branch_coverage=1 00:23:36.470 --rc genhtml_function_coverage=1 00:23:36.470 --rc genhtml_legend=1 00:23:36.470 --rc geninfo_all_blocks=1 00:23:36.470 --rc geninfo_unexecuted_blocks=1 00:23:36.470 00:23:36.470 ' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:36.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.470 --rc genhtml_branch_coverage=1 00:23:36.470 --rc genhtml_function_coverage=1 00:23:36.470 --rc genhtml_legend=1 00:23:36.470 --rc geninfo_all_blocks=1 00:23:36.470 --rc geninfo_unexecuted_blocks=1 00:23:36.470 00:23:36.470 ' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.470 07:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
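# Hedged aside: the "[: : integer expression expected" complaint above is the
# classic empty-string-vs-numeric-test bug in common.sh line 33: '[' "$VAR" -eq 1 ']'
# fails when VAR is unset or empty. A defensive rewrite of that shape
# (variable name illustrative only):
if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then   # default empty to 0 before -eq
    echo "flag enabled"
fi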
'[' -z tcp ']' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.470 07:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.745 
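# Hedged sketch of gather_supported_nvmf_pci_devs as traced above: PCI
# functions are bucketed into e810/x722/mlx arrays by vendor:device ID read
# from sysfs (only the Intel E810 ID 0x8086:0x159b actually matches on this rig):
e810=()
for pci in /sys/bus/pci/devices/*; do
    ven=$(<"$pci/vendor") dev=$(<"$pci/device")
    if [[ $ven == 0x8086 && ( $dev == 0x1592 || $dev == 0x159b ) ]]; then
        e810+=("${pci##*/}")
        echo "Found ${pci##*/} ($ven - $dev)"
    fi
done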
07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:44.745 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:44.745 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.745 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:44.746 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:44.746 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.746 07:33:11 
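# Hedged sketch of the interface mapping above: each matched PCI function
# exposes its netdev name under /sys/bus/pci/devices/<bdf>/net/, which is how
# 0000:4b:00.0 -> cvl_0_0 and 0000:4b:00.1 -> cvl_0_1 were resolved:
for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done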
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.746 07:33:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:23:44.746 00:23:44.746 --- 10.0.0.2 ping statistics --- 00:23:44.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.746 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:23:44.746 00:23:44.746 --- 10.0.0.1 ping statistics --- 00:23:44.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.746 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1501714 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1501714 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1501714 ']' 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.746 07:33:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:44.746 [2024-11-26 07:33:12.192276] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
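# Hedged recap of the nvmf_tcp_init sequence traced above (addresses and
# interface names exactly as configured in this run): one E810 port moves into
# a private namespace to play the target, the other stays in the root
# namespace as the initiator, and reachability is proven both ways with ping:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# the harness also tags this rule with an SPDK_NVMF comment so teardown can find it
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1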
00:23:44.746 [2024-11-26 07:33:12.192341] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.746 [2024-11-26 07:33:12.294564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.746 [2024-11-26 07:33:12.347977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.746 [2024-11-26 07:33:12.348033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.746 [2024-11-26 07:33:12.348042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.746 [2024-11-26 07:33:12.348049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.746 [2024-11-26 07:33:12.348055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.746 [2024-11-26 07:33:12.348821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:45.010 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.010 07:33:13 
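# Hedged rendition of the launch and configuration just traced: nvmf_tgt runs
# inside the target namespace and is configured over /var/tmp/spdk.sock before
# the framework starts (the harness polls the RPC socket in waitforlisten
# before issuing these). Flag spellings below follow current scripts/rpc.py;
# the deliberately tiny 154-buffer small pool is the whole point of wait_for_buf:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small-bufsize 8192
./scripts/rpc.py framework_start_init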
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.272 Malloc0 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.272 [2024-11-26 07:33:13.189133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:45.272 [2024-11-26 07:33:13.225465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.272 07:33:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:45.272 [2024-11-26 07:33:13.336279] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:46.658 Initializing NVMe Controllers 00:23:46.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:46.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:46.658 Initialization complete. Launching workers. 00:23:46.658 ======================================================== 00:23:46.658 Latency(us) 00:23:46.658 Device Information : IOPS MiB/s Average min max 00:23:46.658 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.81 8010.04 63851.64 00:23:46.658 ======================================================== 00:23:46.658 Total : 129.00 16.12 32294.81 8010.04 63851.64 00:23:46.658 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:46.658 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:46.658 rmmod nvme_tcp 00:23:46.919 rmmod nvme_fabrics 00:23:46.919 rmmod nvme_keyring 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1501714 ']' 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1501714 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1501714 ']' 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1501714 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
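# Hedged sketch of the pass criterion above: after the perf run, the test reads
# nvmf_TCP's small-pool retry counter via iobuf_get_stats; zero retries would
# mean the buffer-wait path was never exercised (this run recorded 2038):
retry_count=$(./scripts/rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retry_count -eq 0 ]] && echo "FAIL: no iobuf waits observed" && exit 1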
common/autotest_common.sh@959 -- # uname 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1501714 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1501714' 00:23:46.919 killing process with pid 1501714 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1501714 00:23:46.919 07:33:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1501714 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.180 07:33:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.092 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:49.092 00:23:49.092 real 0m12.798s 00:23:49.092 user 0m5.074s 00:23:49.092 sys 0m6.314s 00:23:49.092 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.092 07:33:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:49.092 ************************************ 00:23:49.092 END TEST nvmf_wait_for_buf 00:23:49.092 ************************************ 00:23:49.092 07:33:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:23:49.092 07:33:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:23:49.092 07:33:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:23:49.092 07:33:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:23:49.092 07:33:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:23:49.092 07:33:17 
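# Hedged recap of the teardown traced above: unload the host-side NVMe modules,
# kill the target process, then strip only the SPDK-tagged firewall rules by
# filtering the saved ruleset on the SPDK_NVMF comment before restoring it:
modprobe -r nvme-tcp nvme-fabrics
kill 1501714 && wait 1501714
iptables-save | grep -v SPDK_NVMF | iptables-restore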
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:57.234 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:57.234 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.234 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:57.235 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:57.235 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:57.235 ************************************ 00:23:57.235 START TEST nvmf_perf_adq 00:23:57.235 ************************************ 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:57.235 * Looking for test storage... 00:23:57.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.235 07:33:24 
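# Hedged sketch of the run_test wrapper whose START/END banners appear in this
# log; the real helper in autotest_common.sh also records timing and xtrace
# state, this keeps just the shape:
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    "$@"; local rc=$?
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
run_test nvmf_perf_adq ./test/nvmf/target/perf_adq.sh --transport=tcp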
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.235 --rc genhtml_branch_coverage=1 00:23:57.235 --rc genhtml_function_coverage=1 00:23:57.235 --rc genhtml_legend=1 00:23:57.235 --rc geninfo_all_blocks=1 00:23:57.235 --rc geninfo_unexecuted_blocks=1 00:23:57.235 00:23:57.235 ' 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.235 --rc genhtml_branch_coverage=1 00:23:57.235 --rc genhtml_function_coverage=1 00:23:57.235 --rc genhtml_legend=1 00:23:57.235 --rc geninfo_all_blocks=1 00:23:57.235 --rc geninfo_unexecuted_blocks=1 00:23:57.235 00:23:57.235 ' 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.235 --rc genhtml_branch_coverage=1 00:23:57.235 --rc genhtml_function_coverage=1 00:23:57.235 --rc genhtml_legend=1 00:23:57.235 --rc geninfo_all_blocks=1 00:23:57.235 --rc geninfo_unexecuted_blocks=1 00:23:57.235 00:23:57.235 ' 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:57.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.235 --rc genhtml_branch_coverage=1 00:23:57.235 --rc genhtml_function_coverage=1 00:23:57.235 --rc genhtml_legend=1 00:23:57.235 --rc geninfo_all_blocks=1 00:23:57.235 --rc geninfo_unexecuted_blocks=1 00:23:57.235 00:23:57.235 ' 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
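# Hedged aside: the common.sh re-sourced at this point derives the host
# identity from nvme-cli (the gen-hostnqn trace follows below); roughly
# equivalent to:
NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}        # bare uuid, e.g. 00d0226a-...282be
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")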
00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.235 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:57.236 07:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:57.236 07:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.826 07:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:03.826 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:03.826 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.826 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:03.827 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:03.827 07:33:31 
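For reference: the "Found net devices under ..." records above come from resolving each matched PCI function to its kernel interface through sysfs and keeping only interfaces that are up. A minimal sketch of that resolution; the PCI address is taken from the log, and the operstate read is a simplified stand-in for the harness's up check:

    # Sketch: map a PCI function to its net interface(s), cf. nvmf/common.sh@411-428
    pci=0000:4b:00.0
    for path in /sys/bus/pci/devices/$pci/net/*; do
        dev=${path##*/}    # basename, e.g. cvl_0_0
        [ "$(<"$path/operstate")" = up ] && echo "Found net devices under $pci: $dev"
    done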
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:03.827 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:03.827 07:33:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:05.740 07:33:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:07.656 07:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.950 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:12.951 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:12.951 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:12.951 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:12.951 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:24:12.951 00:24:12.951 --- 10.0.0.2 ping statistics --- 00:24:12.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.951 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:12.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:24:12.951 00:24:12.951 --- 10.0.0.1 ping statistics --- 00:24:12.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.951 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1511933 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1511933 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1511933 ']' 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.951 07:33:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:12.951 [2024-11-26 07:33:40.943817] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
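For reference: before the target was started above, nvmf_tcp_init (nvmf/common.sh@250-291) built a two-port loopback so the initiator reaches the target over the real E810 link: the target-side port is moved into a network namespace while the initiator-side port stays in the root namespace. Condensed from the logged commands (address flushes and the iptables comment tag elided):

    # cvl_0_0 = target port (inside the netns), cvl_0_1 = initiator port (root ns)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2   # sanity checks in each direction, as in the ping records above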
00:24:12.951 [2024-11-26 07:33:40.943882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.213 [2024-11-26 07:33:41.043350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:13.213 [2024-11-26 07:33:41.098861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.213 [2024-11-26 07:33:41.098911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.213 [2024-11-26 07:33:41.098920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.213 [2024-11-26 07:33:41.098928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.213 [2024-11-26 07:33:41.098934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.213 [2024-11-26 07:33:41.101289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.213 [2024-11-26 07:33:41.101569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.213 [2024-11-26 07:33:41.101729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.213 [2024-11-26 07:33:41.101732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:13.785 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.785 
07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:14.047 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.047 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.047 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.047 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:14.047 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.047 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.047 [2024-11-26 07:33:41.973953] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.047 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.047 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:14.047 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.047 07:33:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.047 Malloc1 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:14.047 [2024-11-26 07:33:42.049353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1512287 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:24:14.047 07:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:16.599 07:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:24:16.599 07:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.599 07:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:16.599 07:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.599 07:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:24:16.599 "tick_rate": 2400000000, 00:24:16.599 "poll_groups": [ 00:24:16.599 { 00:24:16.599 "name": "nvmf_tgt_poll_group_000", 00:24:16.599 "admin_qpairs": 1, 00:24:16.599 "io_qpairs": 1, 00:24:16.599 "current_admin_qpairs": 1, 00:24:16.599 "current_io_qpairs": 1, 00:24:16.599 "pending_bdev_io": 0, 00:24:16.599 "completed_nvme_io": 15619, 00:24:16.599 "transports": [ 00:24:16.599 { 00:24:16.599 "trtype": "TCP" 00:24:16.599 } 00:24:16.599 ] 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "name": "nvmf_tgt_poll_group_001", 00:24:16.599 "admin_qpairs": 0, 00:24:16.599 "io_qpairs": 1, 00:24:16.599 "current_admin_qpairs": 0, 00:24:16.599 "current_io_qpairs": 1, 00:24:16.599 "pending_bdev_io": 0, 00:24:16.599 "completed_nvme_io": 17116, 00:24:16.599 "transports": [ 00:24:16.599 { 00:24:16.599 "trtype": "TCP" 00:24:16.599 } 00:24:16.599 ] 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "name": "nvmf_tgt_poll_group_002", 00:24:16.599 "admin_qpairs": 0, 00:24:16.599 "io_qpairs": 1, 00:24:16.599 "current_admin_qpairs": 0, 00:24:16.599 "current_io_qpairs": 1, 00:24:16.599 "pending_bdev_io": 0, 00:24:16.599 "completed_nvme_io": 17683, 00:24:16.599 "transports": [ 00:24:16.599 { 00:24:16.599 "trtype": "TCP" 00:24:16.599 } 00:24:16.599 ] 00:24:16.599 }, 00:24:16.599 { 00:24:16.599 "name": "nvmf_tgt_poll_group_003", 00:24:16.599 "admin_qpairs": 0, 00:24:16.599 "io_qpairs": 1, 00:24:16.599 "current_admin_qpairs": 0, 00:24:16.599 "current_io_qpairs": 1, 00:24:16.599 "pending_bdev_io": 0, 00:24:16.599 "completed_nvme_io": 15898, 00:24:16.599 "transports": [ 00:24:16.599 { 00:24:16.599 "trtype": "TCP" 00:24:16.599 } 00:24:16.599 ] 00:24:16.599 } 00:24:16.599 ] 00:24:16.599 }' 00:24:16.599 07:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:16.599 07:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:24:16.599 07:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:24:16.599 07:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:24:16.599 07:33:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1512287 00:24:24.734 Initializing NVMe Controllers 00:24:24.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:24.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:24.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:24.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:24.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 
00:24:24.734 Initialization complete. Launching workers. 
00:24:24.734 ======================================================== 
00:24:24.734 Latency(us) 
00:24:24.734 Device Information : IOPS MiB/s Average min max 
00:24:24.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12366.51 48.31 5175.53 1421.48 11915.97 
00:24:24.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13428.61 52.46 4765.50 1435.31 15246.17 
00:24:24.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13229.21 51.68 4851.61 965.98 44734.97 
00:24:24.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12650.71 49.42 5074.78 1304.84 44239.58 
00:24:24.734 ======================================================== 
00:24:24.734 Total : 51675.04 201.86 4961.38 965.98 44734.97 
00:24:24.734 
00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.734 rmmod nvme_tcp 00:24:24.734 rmmod nvme_fabrics 00:24:24.734 rmmod nvme_keyring 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1511933 ']' 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1511933 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1511933 ']' 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1511933 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1511933 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1511933' 00:24:24.734 killing process with pid 1511933 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1511933 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1511933 00:24:24.734 07:33:52 
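For reference: the nvmftestfini/nvmfcleanup records above reduce to the following teardown, condensed from what was logged, with the harness's retry and error handling simplified:

    # Sketch of the logged teardown
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # its rmmod output also drops nvme_fabrics and nvme_keyring
    done
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"     # pid 1511933 in this run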
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.734 07:33:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.645 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.645 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:24:26.645 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:24:26.645 07:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:24:28.557 07:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:24:30.470 07:33:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:35.763 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:35.763 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:35.763 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:35.763 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.763 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.764 07:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:35.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:24:35.764 00:24:35.764 --- 10.0.0.2 ping statistics --- 00:24:35.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.764 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:24:35.764 00:24:35.764 --- 10.0.0.1 ping statistics --- 00:24:35.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.764 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:35.764 net.core.busy_poll = 1 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:35.764 net.core.busy_read = 1 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:35.764 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1516755 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1516755 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1516755 ']' 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.026 07:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:36.026 [2024-11-26 07:34:04.031195] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:24:36.026 [2024-11-26 07:34:04.031246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.286 [2024-11-26 07:34:04.124840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:36.286 [2024-11-26 07:34:04.160496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
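For reference: adq_configure_driver (perf_adq.sh@22-38, above) is the step that distinguishes this second pass: it enables hardware traffic classes and busy polling on the target port before the target is restarted. Condensed from the logged commands; in the log every interface-level command runs inside the cvl_0_0_ns_spdk namespace via ip netns exec, omitted here for brevity:

    # Sketch of the logged ADQ setup
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (NVMe/TCP)
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # scripts/perf/nvmf/set_xps_rxqs cvl_0_0 then aligns XPS with the queue layout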
00:24:36.286 [2024-11-26 07:34:04.160526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.286 [2024-11-26 07:34:04.160534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.286 [2024-11-26 07:34:04.160541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.286 [2024-11-26 07:34:04.160546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.286 [2024-11-26 07:34:04.162015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.286 [2024-11-26 07:34:04.162179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.286 [2024-11-26 07:34:04.162278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:36.286 [2024-11-26 07:34:04.162381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.856 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.857 07:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.117 07:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.117 [2024-11-26 07:34:05.034484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.117 Malloc1 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:37.117 [2024-11-26 07:34:05.110230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1517105 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:24:37.117 07:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:39.662 07:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:24:39.662 07:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.662 07:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:39.662 07:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.662 07:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:24:39.662 "tick_rate": 2400000000, 00:24:39.662 "poll_groups": [ 00:24:39.662 { 00:24:39.662 "name": "nvmf_tgt_poll_group_000", 00:24:39.662 "admin_qpairs": 1, 00:24:39.662 "io_qpairs": 4, 00:24:39.662 "current_admin_qpairs": 1, 00:24:39.662 "current_io_qpairs": 4, 00:24:39.662 "pending_bdev_io": 0, 00:24:39.662 "completed_nvme_io": 34690, 00:24:39.662 "transports": [ 00:24:39.662 { 00:24:39.662 "trtype": "TCP" 00:24:39.662 } 00:24:39.662 ] 00:24:39.662 }, 00:24:39.662 { 00:24:39.662 "name": "nvmf_tgt_poll_group_001", 00:24:39.662 "admin_qpairs": 0, 00:24:39.662 "io_qpairs": 0, 00:24:39.662 "current_admin_qpairs": 0, 00:24:39.662 "current_io_qpairs": 0, 00:24:39.662 "pending_bdev_io": 0, 00:24:39.662 "completed_nvme_io": 0, 00:24:39.662 "transports": [ 00:24:39.662 { 00:24:39.662 "trtype": "TCP" 00:24:39.662 } 00:24:39.662 ] 00:24:39.662 }, 00:24:39.662 { 00:24:39.662 "name": "nvmf_tgt_poll_group_002", 00:24:39.662 "admin_qpairs": 0, 00:24:39.662 "io_qpairs": 0, 00:24:39.662 "current_admin_qpairs": 0, 00:24:39.662 "current_io_qpairs": 0, 00:24:39.662 "pending_bdev_io": 0, 00:24:39.662 "completed_nvme_io": 0, 00:24:39.662 "transports": [ 00:24:39.662 { 00:24:39.662 "trtype": "TCP" 00:24:39.662 } 00:24:39.662 ] 00:24:39.662 }, 00:24:39.662 { 00:24:39.662 "name": "nvmf_tgt_poll_group_003", 00:24:39.662 "admin_qpairs": 0, 00:24:39.662 "io_qpairs": 0, 00:24:39.662 "current_admin_qpairs": 0, 00:24:39.662 "current_io_qpairs": 0, 00:24:39.662 "pending_bdev_io": 0, 00:24:39.662 "completed_nvme_io": 0, 00:24:39.662 "transports": [ 00:24:39.662 { 00:24:39.662 "trtype": "TCP" 00:24:39.662 } 00:24:39.662 ] 00:24:39.662 } 00:24:39.662 ] 00:24:39.662 }' 00:24:39.662 07:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:39.662 07:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:24:39.662 07:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:24:39.662 07:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:24:39.662 07:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1517105 00:24:47.799 Initializing NVMe Controllers 00:24:47.799 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:47.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:47.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:47.799 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:47.799 Initialization complete. Launching workers. 
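Between the qpair check just logged and the perf results that follow, note what perf_adq.sh@107-109 actually asserted: nvmf_get_stats returns one entry per poll group, the jq filter emits one line per group whose current_io_qpairs is 0, and wc -l counts those idle groups. In this run all four I/O qpairs were packed onto nvmf_tgt_poll_group_000, leaving three groups idle; the test fails only if fewer than two are idle. A sketch of that check, with scripts/rpc.py standing in for the test's rpc_cmd wrapper:

# Count poll groups with no active I/O qpairs; 'length' merely forces one
# output line per matching group, which wc -l then counts.
stats=$(rpc.py nvmf_get_stats)
count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<< "$stats" | wc -l)
[[ $count -lt 2 ]] && echo "ADQ steering failed: qpairs spread across poll groups" >&2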
00:24:47.799 ======================================================== 00:24:47.799 Latency(us) 00:24:47.799 Device Information : IOPS MiB/s Average min max 00:24:47.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8916.00 34.83 7178.56 1016.90 53345.86 00:24:47.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5695.40 22.25 11236.56 1417.80 56201.44 00:24:47.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5793.20 22.63 11048.70 1386.88 59445.06 00:24:47.799 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5022.50 19.62 12741.82 1389.22 59295.54 00:24:47.799 ======================================================== 00:24:47.800 Total : 25427.09 99.32 10068.15 1016.90 59445.06 00:24:47.800 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.800 rmmod nvme_tcp 00:24:47.800 rmmod nvme_fabrics 00:24:47.800 rmmod nvme_keyring 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1516755 ']' 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1516755 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1516755 ']' 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1516755 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1516755 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1516755' 00:24:47.800 killing process with pid 1516755 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1516755 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1516755 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.800 
07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.800 07:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:51.102 00:24:51.102 real 0m54.357s 00:24:51.102 user 2m50.867s 00:24:51.102 sys 0m11.340s 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:51.102 ************************************ 00:24:51.102 END TEST nvmf_perf_adq 00:24:51.102 ************************************ 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:51.102 ************************************ 00:24:51.102 START TEST nvmf_shutdown 00:24:51.102 ************************************ 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:51.102 * Looking for test storage... 
00:24:51.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:51.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.102 --rc genhtml_branch_coverage=1 00:24:51.102 --rc genhtml_function_coverage=1 00:24:51.102 --rc genhtml_legend=1 00:24:51.102 --rc geninfo_all_blocks=1 00:24:51.102 --rc geninfo_unexecuted_blocks=1 00:24:51.102 00:24:51.102 ' 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:51.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.102 --rc genhtml_branch_coverage=1 00:24:51.102 --rc genhtml_function_coverage=1 00:24:51.102 --rc genhtml_legend=1 00:24:51.102 --rc geninfo_all_blocks=1 00:24:51.102 --rc geninfo_unexecuted_blocks=1 00:24:51.102 00:24:51.102 ' 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:51.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.102 --rc genhtml_branch_coverage=1 00:24:51.102 --rc genhtml_function_coverage=1 00:24:51.102 --rc genhtml_legend=1 00:24:51.102 --rc geninfo_all_blocks=1 00:24:51.102 --rc geninfo_unexecuted_blocks=1 00:24:51.102 00:24:51.102 ' 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:51.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:51.102 --rc genhtml_branch_coverage=1 00:24:51.102 --rc genhtml_function_coverage=1 00:24:51.102 --rc genhtml_legend=1 00:24:51.102 --rc geninfo_all_blocks=1 00:24:51.102 --rc geninfo_unexecuted_blocks=1 00:24:51.102 00:24:51.102 ' 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
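The xtrace above walks scripts/common.sh's cmp_versions helper while probing the installed lcov ('lt 1.15 2'): both version strings are split on '.', '-' and ':' into arrays, then components are compared left to right up to the longer array's length. Reconstructed as a minimal standalone sketch (the real helper supports more operators and normalizes components through its decimal function; defaulting missing components to 0 is an assumption of this sketch):

cmp_lt() {
    local -a ver1 ver2
    local v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"    # "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && return 1
        ((d1 < d2)) && return 0       # first differing component decides
    done
    return 1                          # equal versions are not less-than
}
cmp_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # true here, matching the log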
00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.102 07:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.102 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:51.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:51.103 07:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:51.103 ************************************ 00:24:51.103 START TEST nvmf_shutdown_tc1 00:24:51.103 ************************************ 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:51.103 07:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.443 07:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.443 07:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:59.443 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:59.443 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:59.443 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:59.443 07:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.443 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:59.443 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:24:59.444 00:24:59.444 --- 10.0.0.2 ping statistics --- 00:24:59.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.444 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:24:59.444 00:24:59.444 --- 10.0.0.1 ping statistics --- 00:24:59.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.444 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1523583 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1523583 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1523583 ']' 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
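Before the target app starts, nvmf_tcp_init above rebuilds the same loopback topology the perf_adq test used: the two back-to-back E810 ports are split across a network namespace (cvl_0_0 with 10.0.0.2 inside cvl_0_0_ns_spdk as the target side, cvl_0_1 with 10.0.0.1 on the host as the initiator side), the ACCEPT rule for port 4420 is tagged with an SPDK_NVMF comment so the iptr cleanup seen earlier can strip it via iptables-save | grep -v SPDK_NVMF | iptables-restore, and both directions are smoke-tested with ping. Condensed from the commands logged above:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side (host)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Comment-tagged so cleanup can filter the rule back out of iptables-save output.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # host -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                    # namespace -> host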
00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.444 07:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.444 [2024-11-26 07:34:26.746086] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:24:59.444 [2024-11-26 07:34:26.746151] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.444 [2024-11-26 07:34:26.846756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.444 [2024-11-26 07:34:26.899049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.444 [2024-11-26 07:34:26.899099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.444 [2024-11-26 07:34:26.899112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.444 [2024-11-26 07:34:26.899119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.444 [2024-11-26 07:34:26.899125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:59.444 [2024-11-26 07:34:26.901146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.444 [2024-11-26 07:34:26.901316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.444 [2024-11-26 07:34:26.901616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:59.444 [2024-11-26 07:34:26.901619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.705 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.705 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:24:59.705 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:59.705 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:59.705 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.705 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.705 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:59.705 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.705 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.705 [2024-11-26 07:34:27.606490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.705 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:59.706 07:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.706 07:34:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:59.706 Malloc1 
00:24:59.706 [2024-11-26 07:34:27.736843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.706 Malloc2 00:24:59.967 Malloc3 00:24:59.967 Malloc4 00:24:59.967 Malloc5 00:24:59.967 Malloc6 00:24:59.967 Malloc7 00:24:59.967 Malloc8 00:25:00.229 Malloc9 00:25:00.229 Malloc10 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1523960 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1523960 /var/tmp/bdevperf.sock 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1523960 ']' 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
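With ten subsystems created (nqn.2016-06.io.spdk:cnode1 through cnode10, each backed by one of the Malloc bdevs above) and the listener up, the test launches bdev_svc against a JSON config that gen_nvmf_target_json assembles from the heredoc template printed below, once per subsystem. Substituting this run's values (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420; hdgst/ddgst fall back to false), the first entry renders to roughly:

cat <<'EOF'    # illustrative rendered form of the template below, subsystem 1
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF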
00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:00.229 { 00:25:00.229 "params": { 00:25:00.229 "name": "Nvme$subsystem", 00:25:00.229 "trtype": "$TEST_TRANSPORT", 00:25:00.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.229 "adrfam": "ipv4", 00:25:00.229 "trsvcid": "$NVMF_PORT", 00:25:00.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.229 "hdgst": ${hdgst:-false}, 00:25:00.229 "ddgst": ${ddgst:-false} 00:25:00.229 }, 00:25:00.229 "method": "bdev_nvme_attach_controller" 00:25:00.229 } 00:25:00.229 EOF 00:25:00.229 )") 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:00.229 { 00:25:00.229 "params": { 00:25:00.229 "name": "Nvme$subsystem", 00:25:00.229 "trtype": "$TEST_TRANSPORT", 00:25:00.229 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.229 "adrfam": "ipv4", 00:25:00.229 "trsvcid": "$NVMF_PORT", 00:25:00.229 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.229 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.229 "hdgst": ${hdgst:-false}, 00:25:00.229 "ddgst": ${ddgst:-false} 00:25:00.229 }, 00:25:00.229 "method": "bdev_nvme_attach_controller" 00:25:00.229 } 00:25:00.229 EOF 00:25:00.229 )") 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:00.229 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:00.230 { 00:25:00.230 "params": { 00:25:00.230 "name": "Nvme$subsystem", 00:25:00.230 "trtype": "$TEST_TRANSPORT", 00:25:00.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.230 "adrfam": "ipv4", 00:25:00.230 "trsvcid": "$NVMF_PORT", 00:25:00.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.230 "hdgst": ${hdgst:-false}, 00:25:00.230 "ddgst": ${ddgst:-false} 00:25:00.230 }, 00:25:00.230 "method": "bdev_nvme_attach_controller" 
00:25:00.230 } 00:25:00.230 EOF 00:25:00.230 )") 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:00.230 { 00:25:00.230 "params": { 00:25:00.230 "name": "Nvme$subsystem", 00:25:00.230 "trtype": "$TEST_TRANSPORT", 00:25:00.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.230 "adrfam": "ipv4", 00:25:00.230 "trsvcid": "$NVMF_PORT", 00:25:00.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.230 "hdgst": ${hdgst:-false}, 00:25:00.230 "ddgst": ${ddgst:-false} 00:25:00.230 }, 00:25:00.230 "method": "bdev_nvme_attach_controller" 00:25:00.230 } 00:25:00.230 EOF 00:25:00.230 )") 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:00.230 { 00:25:00.230 "params": { 00:25:00.230 "name": "Nvme$subsystem", 00:25:00.230 "trtype": "$TEST_TRANSPORT", 00:25:00.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.230 "adrfam": "ipv4", 00:25:00.230 "trsvcid": "$NVMF_PORT", 00:25:00.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.230 "hdgst": ${hdgst:-false}, 00:25:00.230 "ddgst": ${ddgst:-false} 00:25:00.230 }, 00:25:00.230 "method": "bdev_nvme_attach_controller" 00:25:00.230 } 00:25:00.230 EOF 00:25:00.230 )") 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:00.230 { 00:25:00.230 "params": { 00:25:00.230 "name": "Nvme$subsystem", 00:25:00.230 "trtype": "$TEST_TRANSPORT", 00:25:00.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.230 "adrfam": "ipv4", 00:25:00.230 "trsvcid": "$NVMF_PORT", 00:25:00.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.230 "hdgst": ${hdgst:-false}, 00:25:00.230 "ddgst": ${ddgst:-false} 00:25:00.230 }, 00:25:00.230 "method": "bdev_nvme_attach_controller" 00:25:00.230 } 00:25:00.230 EOF 00:25:00.230 )") 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:00.230 { 00:25:00.230 "params": { 00:25:00.230 "name": "Nvme$subsystem", 00:25:00.230 "trtype": "$TEST_TRANSPORT", 00:25:00.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.230 "adrfam": "ipv4", 00:25:00.230 "trsvcid": "$NVMF_PORT", 00:25:00.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.230 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.230 "hdgst": ${hdgst:-false}, 00:25:00.230 "ddgst": ${ddgst:-false} 00:25:00.230 }, 00:25:00.230 "method": "bdev_nvme_attach_controller" 00:25:00.230 } 00:25:00.230 EOF 00:25:00.230 )") 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:00.230 { 00:25:00.230 "params": { 00:25:00.230 "name": "Nvme$subsystem", 00:25:00.230 "trtype": "$TEST_TRANSPORT", 00:25:00.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.230 "adrfam": "ipv4", 00:25:00.230 "trsvcid": "$NVMF_PORT", 00:25:00.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.230 "hdgst": ${hdgst:-false}, 00:25:00.230 "ddgst": ${ddgst:-false} 00:25:00.230 }, 00:25:00.230 "method": "bdev_nvme_attach_controller" 00:25:00.230 } 00:25:00.230 EOF 00:25:00.230 )") 00:25:00.230 [2024-11-26 07:34:28.264286] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:25:00.230 [2024-11-26 07:34:28.264358] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:00.230 { 00:25:00.230 "params": { 00:25:00.230 "name": "Nvme$subsystem", 00:25:00.230 "trtype": "$TEST_TRANSPORT", 00:25:00.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.230 "adrfam": "ipv4", 00:25:00.230 "trsvcid": "$NVMF_PORT", 00:25:00.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.230 "hdgst": ${hdgst:-false}, 00:25:00.230 "ddgst": ${ddgst:-false} 00:25:00.230 }, 00:25:00.230 "method": "bdev_nvme_attach_controller" 00:25:00.230 } 00:25:00.230 EOF 00:25:00.230 )") 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:00.230 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:00.230 { 00:25:00.230 "params": { 00:25:00.230 "name": "Nvme$subsystem", 00:25:00.230 "trtype": "$TEST_TRANSPORT", 00:25:00.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:00.230 "adrfam": "ipv4", 00:25:00.230 "trsvcid": "$NVMF_PORT", 00:25:00.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:00.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:00.230 "hdgst": ${hdgst:-false}, 00:25:00.230 "ddgst": ${ddgst:-false} 00:25:00.230 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 } 00:25:00.231 EOF 00:25:00.231 )") 00:25:00.231 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# cat 00:25:00.231 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:25:00.231 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:00.231 07:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:00.231 "params": { 00:25:00.231 "name": "Nvme1", 00:25:00.231 "trtype": "tcp", 00:25:00.231 "traddr": "10.0.0.2", 00:25:00.231 "adrfam": "ipv4", 00:25:00.231 "trsvcid": "4420", 00:25:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:00.231 "hdgst": false, 00:25:00.231 "ddgst": false 00:25:00.231 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 },{ 00:25:00.231 "params": { 00:25:00.231 "name": "Nvme2", 00:25:00.231 "trtype": "tcp", 00:25:00.231 "traddr": "10.0.0.2", 00:25:00.231 "adrfam": "ipv4", 00:25:00.231 "trsvcid": "4420", 00:25:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:00.231 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:00.231 "hdgst": false, 00:25:00.231 "ddgst": false 00:25:00.231 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 },{ 00:25:00.231 "params": { 00:25:00.231 "name": "Nvme3", 00:25:00.231 "trtype": "tcp", 00:25:00.231 "traddr": "10.0.0.2", 00:25:00.231 "adrfam": "ipv4", 00:25:00.231 "trsvcid": "4420", 00:25:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:00.231 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:00.231 "hdgst": false, 00:25:00.231 "ddgst": false 00:25:00.231 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 },{ 00:25:00.231 "params": { 00:25:00.231 "name": "Nvme4", 00:25:00.231 "trtype": "tcp", 00:25:00.231 "traddr": "10.0.0.2", 00:25:00.231 "adrfam": "ipv4", 00:25:00.231 "trsvcid": "4420", 00:25:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:00.231 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:00.231 "hdgst": false, 00:25:00.231 "ddgst": false 00:25:00.231 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 },{ 00:25:00.231 "params": { 00:25:00.231 "name": "Nvme5", 00:25:00.231 "trtype": "tcp", 00:25:00.231 "traddr": "10.0.0.2", 00:25:00.231 "adrfam": "ipv4", 00:25:00.231 "trsvcid": "4420", 00:25:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:00.231 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:00.231 "hdgst": false, 00:25:00.231 "ddgst": false 00:25:00.231 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 },{ 00:25:00.231 "params": { 00:25:00.231 "name": "Nvme6", 00:25:00.231 "trtype": "tcp", 00:25:00.231 "traddr": "10.0.0.2", 00:25:00.231 "adrfam": "ipv4", 00:25:00.231 "trsvcid": "4420", 00:25:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:00.231 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:00.231 "hdgst": false, 00:25:00.231 "ddgst": false 00:25:00.231 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 },{ 00:25:00.231 "params": { 00:25:00.231 "name": "Nvme7", 00:25:00.231 "trtype": "tcp", 00:25:00.231 "traddr": "10.0.0.2", 00:25:00.231 "adrfam": "ipv4", 00:25:00.231 "trsvcid": "4420", 00:25:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:00.231 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:00.231 "hdgst": false, 00:25:00.231 "ddgst": false 00:25:00.231 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 },{ 00:25:00.231 "params": { 00:25:00.231 "name": "Nvme8", 00:25:00.231 "trtype": "tcp", 00:25:00.231 "traddr": "10.0.0.2", 00:25:00.231 "adrfam": "ipv4", 00:25:00.231 
"trsvcid": "4420", 00:25:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:00.231 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:00.231 "hdgst": false, 00:25:00.231 "ddgst": false 00:25:00.231 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 },{ 00:25:00.231 "params": { 00:25:00.231 "name": "Nvme9", 00:25:00.231 "trtype": "tcp", 00:25:00.231 "traddr": "10.0.0.2", 00:25:00.231 "adrfam": "ipv4", 00:25:00.231 "trsvcid": "4420", 00:25:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:00.231 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:00.231 "hdgst": false, 00:25:00.231 "ddgst": false 00:25:00.231 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 },{ 00:25:00.231 "params": { 00:25:00.231 "name": "Nvme10", 00:25:00.231 "trtype": "tcp", 00:25:00.231 "traddr": "10.0.0.2", 00:25:00.231 "adrfam": "ipv4", 00:25:00.231 "trsvcid": "4420", 00:25:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:00.231 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:00.231 "hdgst": false, 00:25:00.231 "ddgst": false 00:25:00.231 }, 00:25:00.231 "method": "bdev_nvme_attach_controller" 00:25:00.231 }' 00:25:00.492 [2024-11-26 07:34:28.361047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.492 [2024-11-26 07:34:28.414945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.876 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.876 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:01.876 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:01.876 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.876 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:01.876 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.876 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1523960 00:25:01.876 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:01.876 07:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:02.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1523960 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1523583 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 
00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:02.820 { 00:25:02.820 "params": { 00:25:02.820 "name": "Nvme$subsystem", 00:25:02.820 "trtype": "$TEST_TRANSPORT", 00:25:02.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.820 "adrfam": "ipv4", 00:25:02.820 "trsvcid": "$NVMF_PORT", 00:25:02.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.820 "hdgst": ${hdgst:-false}, 00:25:02.820 "ddgst": ${ddgst:-false} 00:25:02.820 }, 00:25:02.820 "method": "bdev_nvme_attach_controller" 00:25:02.820 } 00:25:02.820 EOF 00:25:02.820 )") 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:02.820 { 00:25:02.820 "params": { 00:25:02.820 "name": "Nvme$subsystem", 00:25:02.820 "trtype": "$TEST_TRANSPORT", 00:25:02.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.820 "adrfam": "ipv4", 00:25:02.820 "trsvcid": "$NVMF_PORT", 00:25:02.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.820 "hdgst": ${hdgst:-false}, 00:25:02.820 "ddgst": ${ddgst:-false} 00:25:02.820 }, 00:25:02.820 "method": "bdev_nvme_attach_controller" 00:25:02.820 } 00:25:02.820 EOF 00:25:02.820 )") 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:02.820 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:02.820 { 00:25:02.820 "params": { 00:25:02.821 "name": "Nvme$subsystem", 00:25:02.821 "trtype": "$TEST_TRANSPORT", 00:25:02.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "$NVMF_PORT", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.821 "hdgst": ${hdgst:-false}, 00:25:02.821 "ddgst": ${ddgst:-false} 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 } 00:25:02.821 EOF 00:25:02.821 )") 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:02.821 { 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme$subsystem", 00:25:02.821 "trtype": "$TEST_TRANSPORT", 00:25:02.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "$NVMF_PORT", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.821 "hdgst": ${hdgst:-false}, 00:25:02.821 "ddgst": ${ddgst:-false} 00:25:02.821 }, 00:25:02.821 "method": 
"bdev_nvme_attach_controller" 00:25:02.821 } 00:25:02.821 EOF 00:25:02.821 )") 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:02.821 { 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme$subsystem", 00:25:02.821 "trtype": "$TEST_TRANSPORT", 00:25:02.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "$NVMF_PORT", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.821 "hdgst": ${hdgst:-false}, 00:25:02.821 "ddgst": ${ddgst:-false} 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 } 00:25:02.821 EOF 00:25:02.821 )") 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:02.821 { 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme$subsystem", 00:25:02.821 "trtype": "$TEST_TRANSPORT", 00:25:02.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "$NVMF_PORT", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.821 "hdgst": ${hdgst:-false}, 00:25:02.821 "ddgst": ${ddgst:-false} 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 } 00:25:02.821 EOF 00:25:02.821 )") 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:02.821 { 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme$subsystem", 00:25:02.821 "trtype": "$TEST_TRANSPORT", 00:25:02.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "$NVMF_PORT", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.821 "hdgst": ${hdgst:-false}, 00:25:02.821 "ddgst": ${ddgst:-false} 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 } 00:25:02.821 EOF 00:25:02.821 )") 00:25:02.821 [2024-11-26 07:34:30.766976] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:25:02.821 [2024-11-26 07:34:30.767029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524374 ] 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:02.821 { 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme$subsystem", 00:25:02.821 "trtype": "$TEST_TRANSPORT", 00:25:02.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "$NVMF_PORT", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.821 "hdgst": ${hdgst:-false}, 00:25:02.821 "ddgst": ${ddgst:-false} 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 } 00:25:02.821 EOF 00:25:02.821 )") 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:02.821 { 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme$subsystem", 00:25:02.821 "trtype": "$TEST_TRANSPORT", 00:25:02.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "$NVMF_PORT", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.821 "hdgst": ${hdgst:-false}, 00:25:02.821 "ddgst": ${ddgst:-false} 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 } 00:25:02.821 EOF 00:25:02.821 )") 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:02.821 { 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme$subsystem", 00:25:02.821 "trtype": "$TEST_TRANSPORT", 00:25:02.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "$NVMF_PORT", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:02.821 "hdgst": ${hdgst:-false}, 00:25:02.821 "ddgst": ${ddgst:-false} 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 } 00:25:02.821 EOF 00:25:02.821 )") 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
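Between the two JSON builds, the trace shows the point of the whole test case: bdev_svc (perfpid 1523960) is hard-killed at shutdown.sh@84 while its ten controllers are still attached, and the nvmf target pid is then probed with kill -0 at @89 before the real bdevperf run starts at @92. A minimal sketch of that probe pattern, with the || branch added here purely for illustration:

kill -9 "$perfpid"        # SIGKILL cannot be trapped: the client drops its
                          # NVMe/TCP connections without any teardown
rm -f /var/run/spdk_bdev1
sleep 1
# kill -0 delivers no signal; it only tests that the pid is still alive,
# so it doubles as an assertion that the target survived the abrupt exit.
kill -0 "$nvmfpid" || echo "target died during client shutdown" >&2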
00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:02.821 07:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme1", 00:25:02.821 "trtype": "tcp", 00:25:02.821 "traddr": "10.0.0.2", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "4420", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:02.821 "hdgst": false, 00:25:02.821 "ddgst": false 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 },{ 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme2", 00:25:02.821 "trtype": "tcp", 00:25:02.821 "traddr": "10.0.0.2", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "4420", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:02.821 "hdgst": false, 00:25:02.821 "ddgst": false 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 },{ 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme3", 00:25:02.821 "trtype": "tcp", 00:25:02.821 "traddr": "10.0.0.2", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "4420", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:02.821 "hdgst": false, 00:25:02.821 "ddgst": false 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 },{ 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme4", 00:25:02.821 "trtype": "tcp", 00:25:02.821 "traddr": "10.0.0.2", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "4420", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:02.821 "hdgst": false, 00:25:02.821 "ddgst": false 00:25:02.821 }, 00:25:02.821 "method": "bdev_nvme_attach_controller" 00:25:02.821 },{ 00:25:02.821 "params": { 00:25:02.821 "name": "Nvme5", 00:25:02.821 "trtype": "tcp", 00:25:02.821 "traddr": "10.0.0.2", 00:25:02.821 "adrfam": "ipv4", 00:25:02.821 "trsvcid": "4420", 00:25:02.821 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:02.821 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:02.822 "hdgst": false, 00:25:02.822 "ddgst": false 00:25:02.822 }, 00:25:02.822 "method": "bdev_nvme_attach_controller" 00:25:02.822 },{ 00:25:02.822 "params": { 00:25:02.822 "name": "Nvme6", 00:25:02.822 "trtype": "tcp", 00:25:02.822 "traddr": "10.0.0.2", 00:25:02.822 "adrfam": "ipv4", 00:25:02.822 "trsvcid": "4420", 00:25:02.822 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:02.822 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:02.822 "hdgst": false, 00:25:02.822 "ddgst": false 00:25:02.822 }, 00:25:02.822 "method": "bdev_nvme_attach_controller" 00:25:02.822 },{ 00:25:02.822 "params": { 00:25:02.822 "name": "Nvme7", 00:25:02.822 "trtype": "tcp", 00:25:02.822 "traddr": "10.0.0.2", 00:25:02.822 "adrfam": "ipv4", 00:25:02.822 "trsvcid": "4420", 00:25:02.822 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:02.822 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:02.822 "hdgst": false, 00:25:02.822 "ddgst": false 00:25:02.822 }, 00:25:02.822 "method": "bdev_nvme_attach_controller" 00:25:02.822 },{ 00:25:02.822 "params": { 00:25:02.822 "name": "Nvme8", 00:25:02.822 "trtype": "tcp", 00:25:02.822 "traddr": "10.0.0.2", 00:25:02.822 "adrfam": "ipv4", 00:25:02.822 "trsvcid": "4420", 00:25:02.822 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:02.822 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:25:02.822 "hdgst": false, 00:25:02.822 "ddgst": false 00:25:02.822 }, 00:25:02.822 "method": "bdev_nvme_attach_controller" 00:25:02.822 },{ 00:25:02.822 "params": { 00:25:02.822 "name": "Nvme9", 00:25:02.822 "trtype": "tcp", 00:25:02.822 "traddr": "10.0.0.2", 00:25:02.822 "adrfam": "ipv4", 00:25:02.822 "trsvcid": "4420", 00:25:02.822 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:02.822 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:02.822 "hdgst": false, 00:25:02.822 "ddgst": false 00:25:02.822 }, 00:25:02.822 "method": "bdev_nvme_attach_controller" 00:25:02.822 },{ 00:25:02.822 "params": { 00:25:02.822 "name": "Nvme10", 00:25:02.822 "trtype": "tcp", 00:25:02.822 "traddr": "10.0.0.2", 00:25:02.822 "adrfam": "ipv4", 00:25:02.822 "trsvcid": "4420", 00:25:02.822 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:02.822 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:02.822 "hdgst": false, 00:25:02.822 "ddgst": false 00:25:02.822 }, 00:25:02.822 "method": "bdev_nvme_attach_controller" 00:25:02.822 }' 00:25:02.822 [2024-11-26 07:34:30.858225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.822 [2024-11-26 07:34:30.894094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.205 Running I/O for 1 seconds... 00:25:05.407 1866.00 IOPS, 116.62 MiB/s 00:25:05.407 Latency(us) 00:25:05.407 [2024-11-26T06:34:33.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.407 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:05.407 Verification LBA range: start 0x0 length 0x400 00:25:05.407 Nvme1n1 : 1.16 219.91 13.74 0.00 0.00 288044.16 19770.03 249910.61 00:25:05.407 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:05.407 Verification LBA range: start 0x0 length 0x400 00:25:05.407 Nvme2n1 : 1.15 221.98 13.87 0.00 0.00 275764.48 14964.05 251658.24 00:25:05.407 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:05.407 Verification LBA range: start 0x0 length 0x400 00:25:05.407 Nvme3n1 : 1.09 235.05 14.69 0.00 0.00 259777.49 17367.04 260396.37 00:25:05.407 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:05.407 Verification LBA range: start 0x0 length 0x400 00:25:05.407 Nvme4n1 : 1.10 232.33 14.52 0.00 0.00 258131.20 23374.51 244667.73 00:25:05.407 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:05.407 Verification LBA range: start 0x0 length 0x400 00:25:05.407 Nvme5n1 : 1.10 233.22 14.58 0.00 0.00 252203.31 16384.00 232434.35 00:25:05.407 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:05.407 Verification LBA range: start 0x0 length 0x400 00:25:05.407 Nvme6n1 : 1.17 219.52 13.72 0.00 0.00 264481.49 18022.40 255153.49 00:25:05.407 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:05.407 Verification LBA range: start 0x0 length 0x400 00:25:05.407 Nvme7n1 : 1.20 266.27 16.64 0.00 0.00 214800.90 18568.53 263891.63 00:25:05.407 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:05.407 Verification LBA range: start 0x0 length 0x400 00:25:05.407 Nvme8n1 : 1.21 265.46 16.59 0.00 0.00 211697.66 15182.51 234181.97 00:25:05.407 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:05.407 Verification LBA range: start 0x0 length 0x400 00:25:05.407 Nvme9n1 : 1.19 214.56 13.41 0.00 0.00 256876.37 19223.89 274377.39 00:25:05.407 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:25:05.407 Verification LBA range: start 0x0 length 0x400 00:25:05.407 Nvme10n1 : 1.21 318.10 19.88 0.00 0.00 169814.90 8137.39 242920.11 00:25:05.407 [2024-11-26T06:34:33.505Z] =================================================================================================================== 00:25:05.407 [2024-11-26T06:34:33.505Z] Total : 2426.38 151.65 0.00 0.00 240284.00 8137.39 274377.39 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:05.407 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:05.407 rmmod nvme_tcp 00:25:05.667 rmmod nvme_fabrics 00:25:05.667 rmmod nvme_keyring 00:25:05.667 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1523583 ']' 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1523583 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1523583 ']' 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1523583 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1523583 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1523583' 00:25:05.668 killing process with pid 1523583 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1523583 00:25:05.668 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1523583 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.929 07:34:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.844 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.844 00:25:07.844 real 0m16.854s 00:25:07.844 user 0m33.645s 00:25:07.844 sys 0m7.056s 00:25:07.844 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.844 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:07.844 ************************************ 00:25:07.844 END TEST nvmf_shutdown_tc1 00:25:07.844 ************************************ 00:25:08.105 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:08.105 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:08.105 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.105 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:08.105 ************************************ 00:25:08.105 START TEST nvmf_shutdown_tc2 00:25:08.105 ************************************ 00:25:08.105 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:25:08.105 07:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:25:08.105 07:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.105 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:08.106 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:08.106 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:08.106 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.106 07:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:08.106 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.106 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:25:08.368 00:25:08.368 --- 10.0.0.2 ping statistics --- 00:25:08.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.368 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:25:08.368 00:25:08.368 --- 10.0.0.1 ping statistics --- 00:25:08.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.368 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.368 07:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1525665 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1525665 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1525665 ']' 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.368 07:34:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:08.368 [2024-11-26 07:34:36.436112] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:25:08.368 [2024-11-26 07:34:36.436184] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.628 [2024-11-26 07:34:36.529731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.628 [2024-11-26 07:34:36.565056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.628 [2024-11-26 07:34:36.565085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.628 [2024-11-26 07:34:36.565090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.628 [2024-11-26 07:34:36.565095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.628 [2024-11-26 07:34:36.565100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
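The nvmf_tcp_init sequence traced above amounts to a small network-namespace recipe: put the target-side port in its own namespace, address both ends, open the NVMe/TCP listener port, and verify reachability in both directions. A minimal sketch, assuming the interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addressing, and port 4420 exactly as they appear in the log; the ipts wrapper used by the test (iptables plus an SPDK_NVMF bookkeeping comment) is shown as a plain iptables call:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"                                 # target side gets its own namespace
    ip link set cvl_0_0 netns "$NS"                    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> root ns

With both pings answering, nvmf_tgt is then launched under ip netns exec "$NS" (the EAL and app notices around this point), so the target listens on 10.0.0.2 while the initiator side later connects from the root namespace.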
00:25:08.628 [2024-11-26 07:34:36.566552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.628 [2024-11-26 07:34:36.566678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:08.628 [2024-11-26 07:34:36.566801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.629 [2024-11-26 07:34:36.566802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:09.198 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.198 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:09.198 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:09.198 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.198 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.199 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.199 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:09.199 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.199 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.199 [2024-11-26 07:34:37.289470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.459 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.459 Malloc1 00:25:09.459 [2024-11-26 07:34:37.399887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.459 Malloc2 00:25:09.459 Malloc3 00:25:09.459 Malloc4 00:25:09.459 Malloc5 00:25:09.720 Malloc6 00:25:09.720 Malloc7 00:25:09.720 Malloc8 00:25:09.720 Malloc9 00:25:09.720 Malloc10 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1525894 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1525894 /var/tmp/bdevperf.sock 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1525894 ']' 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.720 07:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:09.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.720 { 00:25:09.720 "params": { 00:25:09.720 "name": "Nvme$subsystem", 00:25:09.720 "trtype": "$TEST_TRANSPORT", 00:25:09.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.720 "adrfam": "ipv4", 00:25:09.720 "trsvcid": "$NVMF_PORT", 00:25:09.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.720 "hdgst": ${hdgst:-false}, 00:25:09.720 "ddgst": ${ddgst:-false} 00:25:09.720 }, 00:25:09.720 "method": "bdev_nvme_attach_controller" 00:25:09.720 } 00:25:09.720 EOF 00:25:09.720 )") 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.720 { 00:25:09.720 "params": { 00:25:09.720 "name": "Nvme$subsystem", 00:25:09.720 "trtype": "$TEST_TRANSPORT", 00:25:09.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.720 "adrfam": "ipv4", 00:25:09.720 "trsvcid": "$NVMF_PORT", 00:25:09.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.720 "hdgst": ${hdgst:-false}, 00:25:09.720 "ddgst": ${ddgst:-false} 00:25:09.720 }, 00:25:09.720 "method": "bdev_nvme_attach_controller" 00:25:09.720 } 00:25:09.720 EOF 00:25:09.720 )") 00:25:09.720 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.982 { 00:25:09.982 "params": { 00:25:09.982 
"name": "Nvme$subsystem", 00:25:09.982 "trtype": "$TEST_TRANSPORT", 00:25:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.982 "adrfam": "ipv4", 00:25:09.982 "trsvcid": "$NVMF_PORT", 00:25:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.982 "hdgst": ${hdgst:-false}, 00:25:09.982 "ddgst": ${ddgst:-false} 00:25:09.982 }, 00:25:09.982 "method": "bdev_nvme_attach_controller" 00:25:09.982 } 00:25:09.982 EOF 00:25:09.982 )") 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.982 { 00:25:09.982 "params": { 00:25:09.982 "name": "Nvme$subsystem", 00:25:09.982 "trtype": "$TEST_TRANSPORT", 00:25:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.982 "adrfam": "ipv4", 00:25:09.982 "trsvcid": "$NVMF_PORT", 00:25:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.982 "hdgst": ${hdgst:-false}, 00:25:09.982 "ddgst": ${ddgst:-false} 00:25:09.982 }, 00:25:09.982 "method": "bdev_nvme_attach_controller" 00:25:09.982 } 00:25:09.982 EOF 00:25:09.982 )") 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.982 { 00:25:09.982 "params": { 00:25:09.982 "name": "Nvme$subsystem", 00:25:09.982 "trtype": "$TEST_TRANSPORT", 00:25:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.982 "adrfam": "ipv4", 00:25:09.982 "trsvcid": "$NVMF_PORT", 00:25:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.982 "hdgst": ${hdgst:-false}, 00:25:09.982 "ddgst": ${ddgst:-false} 00:25:09.982 }, 00:25:09.982 "method": "bdev_nvme_attach_controller" 00:25:09.982 } 00:25:09.982 EOF 00:25:09.982 )") 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.982 { 00:25:09.982 "params": { 00:25:09.982 "name": "Nvme$subsystem", 00:25:09.982 "trtype": "$TEST_TRANSPORT", 00:25:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.982 "adrfam": "ipv4", 00:25:09.982 "trsvcid": "$NVMF_PORT", 00:25:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.982 "hdgst": ${hdgst:-false}, 00:25:09.982 "ddgst": ${ddgst:-false} 00:25:09.982 }, 00:25:09.982 "method": "bdev_nvme_attach_controller" 00:25:09.982 } 00:25:09.982 EOF 00:25:09.982 )") 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:09.982 [2024-11-26 07:34:37.845028] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:25:09.982 [2024-11-26 07:34:37.845081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525894 ] 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.982 { 00:25:09.982 "params": { 00:25:09.982 "name": "Nvme$subsystem", 00:25:09.982 "trtype": "$TEST_TRANSPORT", 00:25:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.982 "adrfam": "ipv4", 00:25:09.982 "trsvcid": "$NVMF_PORT", 00:25:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.982 "hdgst": ${hdgst:-false}, 00:25:09.982 "ddgst": ${ddgst:-false} 00:25:09.982 }, 00:25:09.982 "method": "bdev_nvme_attach_controller" 00:25:09.982 } 00:25:09.982 EOF 00:25:09.982 )") 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.982 { 00:25:09.982 "params": { 00:25:09.982 "name": "Nvme$subsystem", 00:25:09.982 "trtype": "$TEST_TRANSPORT", 00:25:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.982 "adrfam": "ipv4", 00:25:09.982 "trsvcid": "$NVMF_PORT", 00:25:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.982 "hdgst": ${hdgst:-false}, 00:25:09.982 "ddgst": ${ddgst:-false} 00:25:09.982 }, 00:25:09.982 "method": "bdev_nvme_attach_controller" 00:25:09.982 } 00:25:09.982 EOF 00:25:09.982 )") 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.982 { 00:25:09.982 "params": { 00:25:09.982 "name": "Nvme$subsystem", 00:25:09.982 "trtype": "$TEST_TRANSPORT", 00:25:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.982 "adrfam": "ipv4", 00:25:09.982 "trsvcid": "$NVMF_PORT", 00:25:09.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.982 "hdgst": ${hdgst:-false}, 00:25:09.982 "ddgst": ${ddgst:-false} 00:25:09.982 }, 00:25:09.982 "method": "bdev_nvme_attach_controller" 00:25:09.982 } 00:25:09.982 EOF 00:25:09.982 )") 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:09.982 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:09.982 { 00:25:09.982 "params": { 00:25:09.982 "name": "Nvme$subsystem", 00:25:09.982 "trtype": "$TEST_TRANSPORT", 00:25:09.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:09.983 
"adrfam": "ipv4", 00:25:09.983 "trsvcid": "$NVMF_PORT", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:09.983 "hdgst": ${hdgst:-false}, 00:25:09.983 "ddgst": ${ddgst:-false} 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 } 00:25:09.983 EOF 00:25:09.983 )") 00:25:09.983 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:25:09.983 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:25:09.983 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:25:09.983 07:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:09.983 "params": { 00:25:09.983 "name": "Nvme1", 00:25:09.983 "trtype": "tcp", 00:25:09.983 "traddr": "10.0.0.2", 00:25:09.983 "adrfam": "ipv4", 00:25:09.983 "trsvcid": "4420", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:09.983 "hdgst": false, 00:25:09.983 "ddgst": false 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 },{ 00:25:09.983 "params": { 00:25:09.983 "name": "Nvme2", 00:25:09.983 "trtype": "tcp", 00:25:09.983 "traddr": "10.0.0.2", 00:25:09.983 "adrfam": "ipv4", 00:25:09.983 "trsvcid": "4420", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:09.983 "hdgst": false, 00:25:09.983 "ddgst": false 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 },{ 00:25:09.983 "params": { 00:25:09.983 "name": "Nvme3", 00:25:09.983 "trtype": "tcp", 00:25:09.983 "traddr": "10.0.0.2", 00:25:09.983 "adrfam": "ipv4", 00:25:09.983 "trsvcid": "4420", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:09.983 "hdgst": false, 00:25:09.983 "ddgst": false 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 },{ 00:25:09.983 "params": { 00:25:09.983 "name": "Nvme4", 00:25:09.983 "trtype": "tcp", 00:25:09.983 "traddr": "10.0.0.2", 00:25:09.983 "adrfam": "ipv4", 00:25:09.983 "trsvcid": "4420", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:09.983 "hdgst": false, 00:25:09.983 "ddgst": false 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 },{ 00:25:09.983 "params": { 00:25:09.983 "name": "Nvme5", 00:25:09.983 "trtype": "tcp", 00:25:09.983 "traddr": "10.0.0.2", 00:25:09.983 "adrfam": "ipv4", 00:25:09.983 "trsvcid": "4420", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:09.983 "hdgst": false, 00:25:09.983 "ddgst": false 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 },{ 00:25:09.983 "params": { 00:25:09.983 "name": "Nvme6", 00:25:09.983 "trtype": "tcp", 00:25:09.983 "traddr": "10.0.0.2", 00:25:09.983 "adrfam": "ipv4", 00:25:09.983 "trsvcid": "4420", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:09.983 "hdgst": false, 00:25:09.983 "ddgst": false 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 },{ 00:25:09.983 "params": { 00:25:09.983 "name": "Nvme7", 00:25:09.983 "trtype": "tcp", 00:25:09.983 "traddr": "10.0.0.2", 
00:25:09.983 "adrfam": "ipv4", 00:25:09.983 "trsvcid": "4420", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:09.983 "hdgst": false, 00:25:09.983 "ddgst": false 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 },{ 00:25:09.983 "params": { 00:25:09.983 "name": "Nvme8", 00:25:09.983 "trtype": "tcp", 00:25:09.983 "traddr": "10.0.0.2", 00:25:09.983 "adrfam": "ipv4", 00:25:09.983 "trsvcid": "4420", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:09.983 "hdgst": false, 00:25:09.983 "ddgst": false 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 },{ 00:25:09.983 "params": { 00:25:09.983 "name": "Nvme9", 00:25:09.983 "trtype": "tcp", 00:25:09.983 "traddr": "10.0.0.2", 00:25:09.983 "adrfam": "ipv4", 00:25:09.983 "trsvcid": "4420", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:09.983 "hdgst": false, 00:25:09.983 "ddgst": false 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 },{ 00:25:09.983 "params": { 00:25:09.983 "name": "Nvme10", 00:25:09.983 "trtype": "tcp", 00:25:09.983 "traddr": "10.0.0.2", 00:25:09.983 "adrfam": "ipv4", 00:25:09.983 "trsvcid": "4420", 00:25:09.983 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:09.983 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:09.983 "hdgst": false, 00:25:09.983 "ddgst": false 00:25:09.983 }, 00:25:09.983 "method": "bdev_nvme_attach_controller" 00:25:09.983 }' 00:25:09.983 [2024-11-26 07:34:37.933966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.983 [2024-11-26 07:34:37.970581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.369 Running I/O for 10 seconds... 
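The JSON printed just above is the output of gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10: one bdev_nvme_attach_controller entry per subsystem, comma-joined by jq and handed to bdevperf over a process substitution (the /dev/fd/63 in the trace). A cut-down, single-controller sketch of the same invocation; the outer "subsystems"/"config" wrapper is not visible in the trace and is assumed here, and the binary path is abbreviated from the one in the log:

    # Single-controller version of the config printed above (the real run has ten).
    cfg='{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }]
      }]
    }'
    # --json <(...) is the same /dev/fd trick the trace shows as /dev/fd/63.
    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -q 64 -o 65536 -w verify -t 10 --json <(printf '%s\n' "$cfg")

Each attached controller surfaces as an NvmeXn1 bdev, which is why the per-job results further down run from Nvme1n1 to Nvme10n1.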
00:25:11.369 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.369 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:11.369 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:11.369 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.369 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:11.630 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:11.892 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:11.892 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:11.892 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:11.892 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:11.892 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.892 07:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:11.892 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.892 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:11.892 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:11.892 07:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:12.153 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:12.153 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:12.153 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:12.153 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:12.153 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.153 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.153 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1525894 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1525894 ']' 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1525894 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.154 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1525894 00:25:12.414 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.414 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.414 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1525894' 00:25:12.414 killing process with pid 1525894 00:25:12.414 07:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1525894
00:25:12.414 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1525894
00:25:12.414 Received shutdown signal, test time was about 0.972025 seconds
00:25:12.414
00:25:12.414 Latency(us)
00:25:12.414 [2024-11-26T06:34:40.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.414 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.414 Verification LBA range: start 0x0 length 0x400
00:25:12.414 Nvme1n1 : 0.94 205.29 12.83 0.00 0.00 308200.68 22609.92 246415.36
00:25:12.414 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.414 Verification LBA range: start 0x0 length 0x400
00:25:12.414 Nvme2n1 : 0.97 263.68 16.48 0.00 0.00 235137.49 23046.83 246415.36
00:25:12.414 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.414 Verification LBA range: start 0x0 length 0x400
00:25:12.414 Nvme3n1 : 0.94 204.30 12.77 0.00 0.00 296947.48 16384.00 249910.61
00:25:12.414 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.414 Verification LBA range: start 0x0 length 0x400
00:25:12.414 Nvme4n1 : 0.96 268.00 16.75 0.00 0.00 221872.32 11141.12 256901.12
00:25:12.414 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.414 Verification LBA range: start 0x0 length 0x400
00:25:12.414 Nvme5n1 : 0.95 202.78 12.67 0.00 0.00 286692.98 22282.24 253405.87
00:25:12.414 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.414 Verification LBA range: start 0x0 length 0x400
00:25:12.414 Nvme6n1 : 0.97 265.28 16.58 0.00 0.00 214697.60 18022.40 244667.73
00:25:12.414 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.414 Verification LBA range: start 0x0 length 0x400
00:25:12.414 Nvme7n1 : 0.97 268.31 16.77 0.00 0.00 207608.39 2266.45 244667.73
00:25:12.414 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.414 Verification LBA range: start 0x0 length 0x400
00:25:12.414 Nvme8n1 : 0.96 266.09 16.63 0.00 0.00 204670.93 17803.95 249910.61
00:25:12.414 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.414 Verification LBA range: start 0x0 length 0x400
00:25:12.414 Nvme9n1 : 0.96 266.90 16.68 0.00 0.00 198956.59 20425.39 219327.15
00:25:12.414 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:12.414 Verification LBA range: start 0x0 length 0x400
00:25:12.414 Nvme10n1 : 0.95 201.75 12.61 0.00 0.00 256738.99 16602.45 265639.25
00:25:12.414 [2024-11-26T06:34:40.512Z] ===================================================================================================================
00:25:12.414 [2024-11-26T06:34:40.513Z] Total : 2412.40 150.77 0.00 0.00 238211.14 2266.45 265639.25
00:25:12.415 07:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1525665
00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:13.799 07:34:41
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:13.799 rmmod nvme_tcp 00:25:13.799 rmmod nvme_fabrics 00:25:13.799 rmmod nvme_keyring 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1525665 ']' 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1525665 00:25:13.799 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1525665 ']' 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1525665 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1525665 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1525665' 00:25:13.800 killing process with pid 1525665 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1525665 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1525665 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.800 07:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.800 07:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.348 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:16.348 00:25:16.348 real 0m7.953s 00:25:16.348 user 0m24.085s 00:25:16.348 sys 0m1.315s 00:25:16.348 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.348 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.348 ************************************ 00:25:16.348 END TEST nvmf_shutdown_tc2 00:25:16.348 ************************************ 00:25:16.348 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:16.348 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:16.348 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:16.348 07:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:16.348 ************************************ 00:25:16.348 START TEST nvmf_shutdown_tc3 00:25:16.348 ************************************ 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:16.348 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:16.348 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.348 07:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:16.348 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.348 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:16.349 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.349 07:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:25:16.349 00:25:16.349 --- 10.0.0.2 ping statistics --- 00:25:16.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.349 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:16.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:25:16.349 00:25:16.349 --- 10.0.0.1 ping statistics --- 00:25:16.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.349 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1527313 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1527313 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:16.349 07:34:44
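A note for anyone replaying this stage by hand: the nvmf_tcp_init trace above amounts to a two-port topology in which the target-side port is moved into a network namespace, so initiator-to-target TCP traffic actually traverses the NIC instead of the host loopback. A minimal sketch of the same commands, assuming the two e810 ports have already been renamed to the cvl_0_0/cvl_0_1 netdev names this harness uses:

# Target port gets its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # NVMF_INITIATOR_IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # NVMF_FIRST_TARGET_IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port, then sanity-check reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1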
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1527313 ']' 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.349 07:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:16.610 [2024-11-26 07:34:44.473772] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:25:16.610 [2024-11-26 07:34:44.473831] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.610 [2024-11-26 07:34:44.569056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.610 [2024-11-26 07:34:44.605108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.610 [2024-11-26 07:34:44.605138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.610 [2024-11-26 07:34:44.605146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.610 [2024-11-26 07:34:44.605153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.610 [2024-11-26 07:34:44.605164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
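Worth noting while reading the reactor messages below: -m 0x1E is a plain core mask, and the arithmetic explains the core placement seen in this run.

# 0x1E = 2 + 4 + 8 + 16 = 0b11110  -> nvmf_tgt reactors on cores 1,2,3,4
# 0x01 = 0b00001                   -> bdevperf (started later with -c 0x1) on core 0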
00:25:16.610 [2024-11-26 07:34:44.606685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.610 [2024-11-26 07:34:44.606834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.610 [2024-11-26 07:34:44.606954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:16.610 [2024-11-26 07:34:44.606956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.181 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.181 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:17.181 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.442 [2024-11-26 07:34:45.320278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.442 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.442 Malloc1 00:25:17.442 [2024-11-26 07:34:45.436936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.442 Malloc2 00:25:17.442 Malloc3 00:25:17.442 Malloc4 00:25:17.703 Malloc5 00:25:17.703 Malloc6 00:25:17.703 Malloc7 00:25:17.703 Malloc8 00:25:17.703 Malloc9 00:25:17.703 Malloc10 00:25:17.965 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.965 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:17.965 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:17.965 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.965 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1527701 00:25:17.965 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1527701 /var/tmp/bdevperf.sock 00:25:17.965 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1527701 ']' 00:25:17.965 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.965 07:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:17.966 { 00:25:17.966 "params": { 00:25:17.966 "name": "Nvme$subsystem", 00:25:17.966 "trtype": "$TEST_TRANSPORT", 00:25:17.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.966 "adrfam": "ipv4", 00:25:17.966 "trsvcid": "$NVMF_PORT", 00:25:17.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.966 "hdgst": ${hdgst:-false}, 00:25:17.966 "ddgst": ${ddgst:-false} 00:25:17.966 }, 00:25:17.966 "method": "bdev_nvme_attach_controller" 00:25:17.966 } 00:25:17.966 EOF 00:25:17.966 )") 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:17.966 { 00:25:17.966 "params": { 00:25:17.966 "name": "Nvme$subsystem", 00:25:17.966 "trtype": "$TEST_TRANSPORT", 00:25:17.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.966 "adrfam": "ipv4", 00:25:17.966 "trsvcid": "$NVMF_PORT", 00:25:17.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.966 "hdgst": ${hdgst:-false}, 00:25:17.966 "ddgst": ${ddgst:-false} 00:25:17.966 }, 00:25:17.966 "method": "bdev_nvme_attach_controller" 00:25:17.966 } 00:25:17.966 EOF 00:25:17.966 )") 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:17.966 { 00:25:17.966 "params": { 00:25:17.966 
"name": "Nvme$subsystem", 00:25:17.966 "trtype": "$TEST_TRANSPORT", 00:25:17.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.966 "adrfam": "ipv4", 00:25:17.966 "trsvcid": "$NVMF_PORT", 00:25:17.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.966 "hdgst": ${hdgst:-false}, 00:25:17.966 "ddgst": ${ddgst:-false} 00:25:17.966 }, 00:25:17.966 "method": "bdev_nvme_attach_controller" 00:25:17.966 } 00:25:17.966 EOF 00:25:17.966 )") 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:17.966 { 00:25:17.966 "params": { 00:25:17.966 "name": "Nvme$subsystem", 00:25:17.966 "trtype": "$TEST_TRANSPORT", 00:25:17.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.966 "adrfam": "ipv4", 00:25:17.966 "trsvcid": "$NVMF_PORT", 00:25:17.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.966 "hdgst": ${hdgst:-false}, 00:25:17.966 "ddgst": ${ddgst:-false} 00:25:17.966 }, 00:25:17.966 "method": "bdev_nvme_attach_controller" 00:25:17.966 } 00:25:17.966 EOF 00:25:17.966 )") 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:17.966 { 00:25:17.966 "params": { 00:25:17.966 "name": "Nvme$subsystem", 00:25:17.966 "trtype": "$TEST_TRANSPORT", 00:25:17.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.966 "adrfam": "ipv4", 00:25:17.966 "trsvcid": "$NVMF_PORT", 00:25:17.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.966 "hdgst": ${hdgst:-false}, 00:25:17.966 "ddgst": ${ddgst:-false} 00:25:17.966 }, 00:25:17.966 "method": "bdev_nvme_attach_controller" 00:25:17.966 } 00:25:17.966 EOF 00:25:17.966 )") 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:17.966 { 00:25:17.966 "params": { 00:25:17.966 "name": "Nvme$subsystem", 00:25:17.966 "trtype": "$TEST_TRANSPORT", 00:25:17.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.966 "adrfam": "ipv4", 00:25:17.966 "trsvcid": "$NVMF_PORT", 00:25:17.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.966 "hdgst": ${hdgst:-false}, 00:25:17.966 "ddgst": ${ddgst:-false} 00:25:17.966 }, 00:25:17.966 "method": "bdev_nvme_attach_controller" 00:25:17.966 } 00:25:17.966 EOF 00:25:17.966 )") 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:17.966 [2024-11-26 07:34:45.889575] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:25:17.966 [2024-11-26 07:34:45.889630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527701 ] 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:17.966 { 00:25:17.966 "params": { 00:25:17.966 "name": "Nvme$subsystem", 00:25:17.966 "trtype": "$TEST_TRANSPORT", 00:25:17.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.966 "adrfam": "ipv4", 00:25:17.966 "trsvcid": "$NVMF_PORT", 00:25:17.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.966 "hdgst": ${hdgst:-false}, 00:25:17.966 "ddgst": ${ddgst:-false} 00:25:17.966 }, 00:25:17.966 "method": "bdev_nvme_attach_controller" 00:25:17.966 } 00:25:17.966 EOF 00:25:17.966 )") 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:17.966 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:17.966 { 00:25:17.966 "params": { 00:25:17.966 "name": "Nvme$subsystem", 00:25:17.966 "trtype": "$TEST_TRANSPORT", 00:25:17.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.966 "adrfam": "ipv4", 00:25:17.966 "trsvcid": "$NVMF_PORT", 00:25:17.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.966 "hdgst": ${hdgst:-false}, 00:25:17.966 "ddgst": ${ddgst:-false} 00:25:17.966 }, 00:25:17.966 "method": "bdev_nvme_attach_controller" 00:25:17.966 } 00:25:17.966 EOF 00:25:17.966 )") 00:25:17.967 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:17.967 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:17.967 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:17.967 { 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme$subsystem", 00:25:17.967 "trtype": "$TEST_TRANSPORT", 00:25:17.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "$NVMF_PORT", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.967 "hdgst": ${hdgst:-false}, 00:25:17.967 "ddgst": ${ddgst:-false} 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 } 00:25:17.967 EOF 00:25:17.967 )") 00:25:17.967 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:17.967 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:17.967 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:17.967 { 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme$subsystem", 00:25:17.967 "trtype": "$TEST_TRANSPORT", 00:25:17.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:17.967 
"adrfam": "ipv4", 00:25:17.967 "trsvcid": "$NVMF_PORT", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:17.967 "hdgst": ${hdgst:-false}, 00:25:17.967 "ddgst": ${ddgst:-false} 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 } 00:25:17.967 EOF 00:25:17.967 )") 00:25:17.967 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:25:17.967 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:25:17.967 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:25:17.967 07:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme1", 00:25:17.967 "trtype": "tcp", 00:25:17.967 "traddr": "10.0.0.2", 00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "4420", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:17.967 "hdgst": false, 00:25:17.967 "ddgst": false 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 },{ 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme2", 00:25:17.967 "trtype": "tcp", 00:25:17.967 "traddr": "10.0.0.2", 00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "4420", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:17.967 "hdgst": false, 00:25:17.967 "ddgst": false 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 },{ 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme3", 00:25:17.967 "trtype": "tcp", 00:25:17.967 "traddr": "10.0.0.2", 00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "4420", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:17.967 "hdgst": false, 00:25:17.967 "ddgst": false 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 },{ 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme4", 00:25:17.967 "trtype": "tcp", 00:25:17.967 "traddr": "10.0.0.2", 00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "4420", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:17.967 "hdgst": false, 00:25:17.967 "ddgst": false 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 },{ 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme5", 00:25:17.967 "trtype": "tcp", 00:25:17.967 "traddr": "10.0.0.2", 00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "4420", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:17.967 "hdgst": false, 00:25:17.967 "ddgst": false 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 },{ 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme6", 00:25:17.967 "trtype": "tcp", 00:25:17.967 "traddr": "10.0.0.2", 00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "4420", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:17.967 "hdgst": false, 00:25:17.967 "ddgst": false 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 },{ 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme7", 00:25:17.967 "trtype": "tcp", 00:25:17.967 "traddr": "10.0.0.2", 
00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "4420", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:17.967 "hdgst": false, 00:25:17.967 "ddgst": false 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 },{ 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme8", 00:25:17.967 "trtype": "tcp", 00:25:17.967 "traddr": "10.0.0.2", 00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "4420", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:17.967 "hdgst": false, 00:25:17.967 "ddgst": false 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 },{ 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme9", 00:25:17.967 "trtype": "tcp", 00:25:17.967 "traddr": "10.0.0.2", 00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "4420", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:17.967 "hdgst": false, 00:25:17.967 "ddgst": false 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 },{ 00:25:17.967 "params": { 00:25:17.967 "name": "Nvme10", 00:25:17.967 "trtype": "tcp", 00:25:17.967 "traddr": "10.0.0.2", 00:25:17.967 "adrfam": "ipv4", 00:25:17.967 "trsvcid": "4420", 00:25:17.967 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:17.967 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:17.967 "hdgst": false, 00:25:17.967 "ddgst": false 00:25:17.967 }, 00:25:17.967 "method": "bdev_nvme_attach_controller" 00:25:17.967 }' 00:25:17.967 [2024-11-26 07:34:45.979253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.967 [2024-11-26 07:34:46.015608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.350 Running I/O for 10 seconds... 
00:25:19.350 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.350 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:25:19.350 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:19.350 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.350 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:25:19.610 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:19.871 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:19.871 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:19.871 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:19.871 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:19.871 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.871 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:19.871 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.871 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:25:19.871 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:25:19.871 07:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:25:20.131 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:25:20.131 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:25:20.131 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:20.131 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:25:20.131 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.131 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:20.131 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1527313 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1527313 ']' 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1527313 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1527313 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:20.408 07:34:48 
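The polling visible above, with read_io_count climbing 3 -> 67 -> 131 until the -ge 100 check passes, is shutdown.sh's waitforio helper making sure real I/O is in flight before the target is killed. Reconstructed from the xtrace (shutdown.sh@51 through @70), it is roughly:

# Reconstructed from the trace; waits until a bdev has served >= 100 reads (~2.5 s budget).
waitforio() {
    local sock=$1 bdev=$2
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i
    for ((i = 10; i != 0; i--)); do
        # One iostat sample from bdevperf over its RPC socket.
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # enough reads observed; safe to kill the target mid-I/O
            break
        fi
        sleep 0.25
    done
    return $ret
}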
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1527313' 00:25:20.408 killing process with pid 1527313 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1527313 00:25:20.408 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1527313
00:25:20.408 [2024-11-26 07:34:48.300081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a9b0 is same with the state(6) to be set
[previous message repeated verbatim for tqpair=0x103a9b0, timestamps 07:34:48.300135 through 07:34:48.300441]
00:25:20.409 [2024-11-26 07:34:48.301629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100c600 is same with the state(6) to be set
[previous message repeated verbatim for tqpair=0x100c600, timestamps 07:34:48.301642 through 07:34:48.301931]
00:25:20.410 [2024-11-26 07:34:48.303295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set
[previous message repeated verbatim for tqpair=0x100cad0, timestamps 07:34:48.303323 through 07:34:48.303435]
with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303543] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.303626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cad0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the 
state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.410 [2024-11-26 07:34:48.304874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.304995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 
07:34:48.305062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100cfc0 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same 
with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.305995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306037] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.411 [2024-11-26 07:34:48.306099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the 
state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.306214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d490 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 
07:34:48.307149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.412 [2024-11-26 07:34:48.307219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same 
with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.307310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100d960 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308248] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the 
state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 00:25:20.413 [2024-11-26 07:34:48.308449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
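The flood above comes from the NVMe-oF TCP target transport: nvmf_tcp_qpair_set_recv_state() in tcp.c refuses to transition a qpair's PDU receive state to the state it already holds, and logs one error line per redundant attempt, which is why the same message repeats with only the timestamp and qpair address changing. A minimal sketch of that guard follows; the enum values, struct layout, and helper name are illustrative assumptions, not SPDK's actual internal definitions:

    #include <stdio.h>

    /* Illustrative stand-ins for the transport's internal types
     * (assumed, simplified). */
    enum pdu_recv_state {
        PDU_RECV_STATE_AWAIT_PDU_READY = 0,
        PDU_RECV_STATE_ERROR           = 6,   /* matches "state(6)" in the log */
    };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    /* Guard modeled on the tcp.c:1773 message: setting the state a qpair
     * already holds is reported as an error and otherwise ignored. */
    static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = PDU_RECV_STATE_ERROR };

        /* Each redundant call during qpair teardown produces one more
         * log line, hence the long runs per qpair address above. */
        set_recv_state(&q, PDU_RECV_STATE_ERROR);
        set_recv_state(&q, PDU_RECV_STATE_ERROR);
        return 0;
    }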
00:25:20.413 [2024-11-26 07:34:48.314561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:20.413 [2024-11-26 07:34:48.314595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.413 [... matching ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs for cid:1, cid:2 and cid:3 elided ...]
00:25:20.414 [2024-11-26 07:34:48.314660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865cb0 is same with the state(6) to be set
00:25:20.414 [... the same four-command abort sequence and recv-state error repeated for tqpair=0xc91d70, 0xcce630, 0xc86b80, 0x863fc0, 0x85b170, 0x77d610 and 0x85c930, elided ...]
00:25:20.414 [2024-11-26 07:34:48.315358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.414 [2024-11-26 07:34:48.315369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.414 [... WRITE commands cid:1 (lba:24704) and cid:2 (lba:24832) likewise completed ABORTED - SQ DELETION ...]
00:25:20.414 [2024-11-26 07:34:48.315408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.414 [2024-11-26 07:34:48.315417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.414 [2024-11-26 07:34:48.315430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.414 [2024-11-26 07:34:48.315440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.414 [2024-11-26 07:34:48.315447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.414 [2024-11-26 07:34:48.315457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.414 [2024-11-26 07:34:48.315465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.414 [2024-11-26 07:34:48.315474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.414 [2024-11-26 07:34:48.315481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.414 [2024-11-26 07:34:48.315490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.415 [2024-11-26 07:34:48.315931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.315989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.315999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.316006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.316015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.316022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.316031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.316039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.316048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.316057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.316067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.316074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.316083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.316090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 
[2024-11-26 07:34:48.316099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.316107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.316116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.316123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.316133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.316140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.415 [2024-11-26 07:34:48.316149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.415 [2024-11-26 07:34:48.316157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 
07:34:48.316272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 07:34:48.316421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.416 [2024-11-26 07:34:48.316428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.416 [2024-11-26 
07:34:48.316437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:20.416 [2024-11-26 07:34:48.316444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.416 [2024-11-26 07:34:48.318385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 
00:25:20.416 [2024-11-26 07:34:48.318421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x865cb0 (9): Bad file descriptor 
00:25:20.416 [2024-11-26 07:34:48.318778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100de30 is same with the state(6) to be set 
00:25:20.416 [2024-11-26 07:34:48.319107] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:25:20.416 [2024-11-26 07:34:48.319808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:20.416 [2024-11-26 07:34:48.319825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x865cb0 with addr=10.0.0.2, port=4420 
00:25:20.416 [2024-11-26 07:34:48.319834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865cb0 is same with the state(6) to be set 
00:25:20.416 [2024-11-26 07:34:48.320004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x865cb0 (9): Bad file descriptor 
00:25:20.416 [2024-11-26 07:34:48.320044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100e320 is same with the state(6) to be set 
00:25:20.416 [2024-11-26 07:34:48.320165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 
00:25:20.416 [2024-11-26 07:34:48.320180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 
00:25:20.416 [2024-11-26 07:34:48.320192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:25:20.416 [2024-11-26 07:34:48.320202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:25:20.417 [2024-11-26 07:34:48.320465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:20.417 [2024-11-26 07:34:48.320480] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100e320 is same with the state(6) to be set 00:25:20.417 [2024-11-26 07:34:48.320493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100e320 is same with the state(6) to be set 00:25:20.417 [2024-11-26 07:34:48.320495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.417 [2024-11-26 07:34:48.320752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.417 [2024-11-26 07:34:48.320759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.320986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.320993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.418 [2024-11-26 07:34:48.321002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.418 [2024-11-26 07:34:48.321091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 
07:34:48.321192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same 
with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.418 [2024-11-26 07:34:48.321353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.419 [2024-11-26 07:34:48.321358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.419 [2024-11-26 07:34:48.321363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.419 [2024-11-26 07:34:48.321367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.419 [2024-11-26 07:34:48.321372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.419 [2024-11-26 07:34:48.321376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.419 [2024-11-26 07:34:48.321381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.419 [2024-11-26 07:34:48.321386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a4e0 is same with the state(6) to be set 00:25:20.419 [2024-11-26 07:34:48.321390] 
00:25:20.419 [2024-11-26 07:34:48.329351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.419 [2024-11-26 07:34:48.329394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.419 [2024-11-26 07:34:48.329927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.419 [2024-11-26 07:34:48.329934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.419 [2024-11-26 07:34:48.329943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6a170 is same with the state(6) to be set
00:25:20.419 [2024-11-26 07:34:48.330144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc91d70 (9): Bad file descriptor
00:25:20.419 [2024-11-26 07:34:48.330197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:20.419 [2024-11-26 07:34:48.330208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.419 [2024-11-26 07:34:48.330217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:20.420 [2024-11-26 07:34:48.330224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.420 [2024-11-26 07:34:48.330232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:20.420 [2024-11-26 07:34:48.330239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.420 [2024-11-26 07:34:48.330247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:20.420 [2024-11-26 07:34:48.330254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.420 [2024-11-26 07:34:48.330261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca98a0 is same with the state(6) to be set
00:25:20.420 [2024-11-26 07:34:48.330282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcce630 (9): Bad file descriptor
00:25:20.420 [2024-11-26 07:34:48.330300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc86b80 (9): Bad file descriptor
00:25:20.420 [2024-11-26 07:34:48.330314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863fc0 (9): Bad file descriptor
00:25:20.420 [2024-11-26 07:34:48.330332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85b170 (9): Bad file descriptor
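In spdk_nvme_print_completion output the "(00/08)" pair reads as (SCT/SC) in hex: status code type 0x0 (generic command status) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion -- the expected completion for I/O still in flight when its submission queue is torn down during a reset. A self-contained decode of that 16-bit status word, assuming the bit layout from the NVMe base spec:

/* Decode the NVMe completion Status Field the way these log lines print it.
 * Per the NVMe base spec the 16-bit word is: P (bit 0), SC (bits 8:1),
 * SCT (bits 11:9), CRD (bits 13:12), M (bit 14), DNR (bit 15). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* ABORTED - SQ DELETION: SCT 0x0 (generic), SC 0x08, p/m/dnr all clear. */
    uint16_t status = (uint16_t)((0x0 << 9) | (0x08 << 1));

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
           (status >> 9) & 0x7,    /* SCT: status code type   */
           (status >> 1) & 0xff,   /* SC:  status code        */
           status & 0x1,           /* P:   phase tag          */
           (status >> 14) & 0x1,   /* M:   more               */
           (status >> 15) & 0x1);  /* DNR: do not retry       */
    return 0;
}

Compiled and run, this prints "(00/08) p:0 m:0 dnr:0", matching the completions above.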
00:25:20.420 [2024-11-26 07:34:48.330347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d610 (9): Bad file descriptor
00:25:20.420 [2024-11-26 07:34:48.330364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85c930 (9): Bad file descriptor
00:25:20.420 [2024-11-26 07:34:48.330394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:20.420 [2024-11-26 07:34:48.330403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.420 [2024-11-26 07:34:48.330411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:20.420 [2024-11-26 07:34:48.330419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.420 [2024-11-26 07:34:48.330427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:20.420 [2024-11-26 07:34:48.330435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.420 [2024-11-26 07:34:48.330443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:20.420 [2024-11-26 07:34:48.330450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.420 [2024-11-26 07:34:48.330458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96c0 is same with the state(6) to be set
00:25:20.420 [2024-11-26 07:34:48.331916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:25:20.420 [2024-11-26 07:34:48.332079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:20.420 [2024-11-26 07:34:48.332455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.420 [2024-11-26 07:34:48.332493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc86b80 with addr=10.0.0.2, port=4420
00:25:20.420 [2024-11-26 07:34:48.332506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86b80 is same with the state(6) to be set
00:25:20.420 [2024-11-26 07:34:48.333094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.420 [2024-11-26 07:34:48.333112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x865cb0 with addr=10.0.0.2, port=4420
00:25:20.420 [2024-11-26 07:34:48.333121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865cb0 is same with the state(6) to be set
00:25:20.420 [2024-11-26 07:34:48.333132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc86b80 (9): Bad file descriptor
00:25:20.420 [2024-11-26 07:34:48.333206] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:20.420 [2024-11-26 07:34:48.333245] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:20.420 [2024-11-26 07:34:48.333262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x865cb0 (9): Bad file descriptor
00:25:20.420 [2024-11-26 07:34:48.333272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:25:20.420 [2024-11-26 07:34:48.333280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:25:20.420 [2024-11-26 07:34:48.333289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:25:20.420 [2024-11-26 07:34:48.333298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:25:20.420 [2024-11-26 07:34:48.333351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.420 [2024-11-26 07:34:48.333361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
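Two errnos recur through this stretch: connect() failing with 111 and qpair flushes failing with 9. On Linux those are ECONNREFUSED (the target at 10.0.0.2:4420 is reachable but nothing is accepting; 4420 is the IANA-assigned NVMe/TCP port) and EBADF (the socket fd was already closed by the time the flush ran). A small reproduction, assuming a reachable host with no listener on the port; an unreachable host would surface ETIMEDOUT or EHOSTUNREACH instead:

/* Reproduce the two errnos seen above on Linux: connect() to a port with no
 * listener yields 111 (ECONNREFUSED); I/O on a closed fd yields 9 (EBADF).
 * The address/port are the ones from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    if (write(fd, "x", 1) < 0)  /* fd closed on purpose: expect errno 9, EBADF */
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    return 0;
}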
00:25:20.422 [2024-11-26 07:34:48.334440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.422 [2024-11-26 07:34:48.334448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.422 [2024-11-26 07:34:48.334458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6b6f0 is same with the state(6) to be set
00:25:20.422 [2024-11-26 07:34:48.334541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:25:20.422 [2024-11-26 07:34:48.334551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:25:20.422 [2024-11-26 07:34:48.334559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:20.422 [2024-11-26 07:34:48.334566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
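Each "resetting controller" notice above is one pass of a disconnect/reconnect cycle: the reconnect's connect() is refused, nvme_ctrlr_process_init reports the controller in error state, and bdev_nvme_reset_ctrlr_complete declares the reset failed before the next attempt begins. The generic shape of such a retry loop, sketched with a hypothetical try_reconnect() stand-in; this is not SPDK's bdev_nvme reconnect logic, just the pattern:

/* Retry-with-backoff sketch for "resetting controller failed" recovery.
 * try_reconnect() is a hypothetical stand-in for whatever re-establishes
 * the transport (e.g. an NVMe/TCP connect). */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool try_reconnect(int attempt)
{
    printf("attempt %d: resetting controller\n", attempt);
    return false;  /* stand-in: the target in this log kept refusing (errno 111) */
}

int main(void)
{
    unsigned delay_s = 1;

    for (int attempt = 1; attempt <= 5; attempt++) {
        if (try_reconnect(attempt))
            return 0;
        fprintf(stderr, "attempt %d: controller reinitialization failed\n", attempt);
        sleep(delay_s);
        if (delay_s < 8)
            delay_s *= 2;  /* exponential backoff, capped */
    }
    fprintf(stderr, "resetting controller failed, giving up\n");
    return 1;
}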
00:25:20.422 [2024-11-26 07:34:48.335825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:25:20.422 [2024-11-26 07:34:48.335845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca98a0 (9): Bad file descriptor
00:25:20.422 [2024-11-26 07:34:48.336515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.422 [2024-11-26 07:34:48.336531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca98a0 with addr=10.0.0.2, port=4420
00:25:20.422 [2024-11-26 07:34:48.336539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca98a0 is same with the state(6) to be set
00:25:20.422 [2024-11-26 07:34:48.336587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca98a0 (9): Bad file descriptor
00:25:20.422 [2024-11-26 07:34:48.336633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:25:20.422 [2024-11-26 07:34:48.336641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:25:20.422 [2024-11-26 07:34:48.336649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:25:20.422 [2024-11-26 07:34:48.336656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:25:20.422 [2024-11-26 07:34:48.340206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca96c0 (9): Bad file descriptor
00:25:20.422 [2024-11-26 07:34:48.340317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.422 [2024-11-26 07:34:48.340329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.422 [2024-11-26 07:34:48.340341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.422 [2024-11-26 07:34:48.340349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.422 [2024-11-26 07:34:48.340359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.422 [2024-11-26 07:34:48.340366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.422 [2024-11-26 07:34:48.340375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.422 [2024-11-26 07:34:48.340383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.422 [2024-11-26 07:34:48.340392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.422 [2024-11-26 07:34:48.340400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.422 [2024-11-26 07:34:48.340409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.422 [2024-11-26 07:34:48.340421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.422 [2024-11-26 07:34:48.340431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.422 [2024-11-26 07:34:48.340438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.423 [2024-11-26 07:34:48.341268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.423 [2024-11-26
07:34:48.341276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.423 [2024-11-26 07:34:48.341287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.423 [2024-11-26 07:34:48.341295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.423 [2024-11-26 07:34:48.341304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.423 [2024-11-26 07:34:48.341311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.423 [2024-11-26 07:34:48.341321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.423 [2024-11-26 07:34:48.341328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.423 [2024-11-26 07:34:48.341339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.423 [2024-11-26 07:34:48.341346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.423 [2024-11-26 07:34:48.341355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.423 [2024-11-26 07:34:48.341363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.423 [2024-11-26 07:34:48.341372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.423 [2024-11-26 07:34:48.341380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.341390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.341397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.341407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.341414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.341422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa69b60 is same with the state(6) to be set 00:25:20.424 [2024-11-26 07:34:48.342701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.342986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.342993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.424 [2024-11-26 07:34:48.343349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.424 [2024-11-26 07:34:48.343356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.343815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.343823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6ae40 is same with the state(6) to be set 00:25:20.425 [2024-11-26 07:34:48.345101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345250] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.425 [2024-11-26 07:34:48.345326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.425 [2024-11-26 07:34:48.345333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.426 [2024-11-26 07:34:48.345901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.426 [2024-11-26 07:34:48.345910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.345917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.345927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.345935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:20.427 [2024-11-26 07:34:48.345944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.345951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.345961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.345968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.345977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.345985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.345994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 
07:34:48.346120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.346236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.346244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc660f0 is same with the state(6) to be set 00:25:20.427 [2024-11-26 07:34:48.347570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.347583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.347595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.347603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.427 [2024-11-26 07:34:48.347614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.427 [2024-11-26 07:34:48.347622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.427 [2024-11-26 07:34:48.347632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.427 [2024-11-26 07:34:48.347640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:4 through cid:63, lba:16896 through lba:24448 ...]
00:25:20.429 [2024-11-26 07:34:48.348711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc67670 is same with the state(6) to be set
00:25:20.429 [2024-11-26 07:34:48.349983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.429 [2024-11-26 07:34:48.349997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical pairs repeat for cid:1 through cid:63, lba:24704 through lba:32640 ...]
00:25:20.431 [2024-11-26 07:34:48.351114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc68bf0 is same with the state(6) to be set
00:25:20.431 [2024-11-26 07:34:48.352405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.431 [2024-11-26 07:34:48.352420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical pairs repeat for cid:1 through cid:63, lba:16512 through lba:24448 ...]
00:25:20.433 [2024-11-26 07:34:48.353549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa6660 is same with the state(6) to be set
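A note for readers scanning the abort flood above: the "(00/08)" printed by spdk_nvme_print_completion is the NVMe status pair SCT/SC, i.e. status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), the expected completion for in-flight reads whose submission queue is torn down while the controller resets. The p/m/dnr flags on the same line are the phase, more, and do-not-retry bits of the same 16-bit completion status word. A minimal decode sketch in plain C (illustrative only, not SPDK source; the example value 0x0010 is a hypothetical status word picked to reproduce the lines above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Hypothetical status word: SCT=0x0, SC=0x08, P=0, M=0, DNR=0,
     * matching the "ABORTED - SQ DELETION (00/08)" completions above. */
    uint16_t status = 0x0010;

    unsigned p   = status & 0x1;          /* bit 0: phase tag            */
    unsigned sc  = (status >> 1) & 0xff;  /* bits 8:1: status code       */
    unsigned sct = (status >> 9) & 0x7;   /* bits 11:9: status code type */
    unsigned m   = (status >> 14) & 0x1;  /* bit 14: more                */
    unsigned dnr = (status >> 15) & 0x1;  /* bit 15: do not retry        */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0;
}

Since dnr is 0 on every completion above, the host is permitted to retry these reads once the controllers have been reset and the qpairs reconnect.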
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:25:20.433 [2024-11-26 07:34:48.354813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:25:20.433 [2024-11-26 07:34:48.354824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:25:20.433 [2024-11-26 07:34:48.354835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:25:20.433 [2024-11-26 07:34:48.354912] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:25:20.433 [2024-11-26 07:34:48.354929] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:25:20.433 [2024-11-26 07:34:48.355010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:25:20.433 [2024-11-26 07:34:48.355023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:25:20.433 [2024-11-26 07:34:48.355370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.433 [2024-11-26 07:34:48.355386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c930 with addr=10.0.0.2, port=4420 00:25:20.433 [2024-11-26 07:34:48.355395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c930 is same with the state(6) to be set 00:25:20.433 [2024-11-26 07:34:48.355584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.433 [2024-11-26 07:34:48.355594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85b170 with addr=10.0.0.2, port=4420 00:25:20.433 [2024-11-26 07:34:48.355601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b170 is same with the state(6) to be set 00:25:20.433 [2024-11-26 07:34:48.355887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.433 [2024-11-26 07:34:48.355897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863fc0 with addr=10.0.0.2, port=4420 00:25:20.433 [2024-11-26 07:34:48.355905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863fc0 is same with the state(6) to be set 00:25:20.433 [2024-11-26 07:34:48.355976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.433 [2024-11-26 07:34:48.355987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc91d70 with addr=10.0.0.2, port=4420 00:25:20.433 [2024-11-26 07:34:48.355995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc91d70 is same with the state(6) to be set 00:25:20.433 [2024-11-26 07:34:48.357333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 
07:34:48.357369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.433 [2024-11-26 07:34:48.357698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.433 [2024-11-26 07:34:48.357706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.357989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.357999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.434 [2024-11-26 07:34:48.358265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.434 [2024-11-26 07:34:48.358272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.435 [2024-11-26 07:34:48.358282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.435 [2024-11-26 07:34:48.358289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.435 [2024-11-26 07:34:48.358298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.435 [2024-11-26 07:34:48.358306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.435 [2024-11-26 07:34:48.358315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.435 [2024-11-26 07:34:48.358323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.435 [2024-11-26 07:34:48.358332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.435 [2024-11-26 07:34:48.358340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.435 [2024-11-26 07:34:48.358349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.435 [2024-11-26 07:34:48.358357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.435 [2024-11-26 07:34:48.358366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.435 [2024-11-26 07:34:48.358373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.435 [2024-11-26 07:34:48.358383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.435 [2024-11-26 07:34:48.358390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.435 [2024-11-26 07:34:48.358400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.435 [2024-11-26 07:34:48.358407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.435 [2024-11-26 07:34:48.358416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:20.435 [2024-11-26 07:34:48.358425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.435 [2024-11-26 07:34:48.358435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.435 [2024-11-26 07:34:48.358442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.435 [2024-11-26 07:34:48.358452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:20.435 [2024-11-26 07:34:48.358459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:20.435 [2024-11-26 07:34:48.358467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6cc20 is same with the state(6) to be set
00:25:20.435 [2024-11-26 07:34:48.360572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:25:20.435 [2024-11-26 07:34:48.360598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:20.435 [2024-11-26 07:34:48.360608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:25:20.435 task offset: 24576 on job bdev=Nvme1n1 fails
00:25:20.435
00:25:20.435 Latency(us)
00:25:20.435 [2024-11-26T06:34:48.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:20.435 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.435 Job: Nvme1n1 ended in about 0.94 seconds with error
00:25:20.435 Verification LBA range: start 0x0 length 0x400
00:25:20.435 Nvme1n1 : 0.94 203.65 12.73 67.88 0.00 232979.55 3181.23 253405.87
00:25:20.435 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.435 Job: Nvme2n1 ended in about 0.97 seconds with error
00:25:20.435 Verification LBA range: start 0x0 length 0x400
00:25:20.435 Nvme2n1 : 0.97 202.62 12.66 66.16 0.00 230638.83 20206.93 227191.47
00:25:20.435 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.435 Job: Nvme3n1 ended in about 0.97 seconds with error
00:25:20.435 Verification LBA range: start 0x0 length 0x400
00:25:20.435 Nvme3n1 : 0.97 202.12 12.63 66.00 0.00 226484.43 14417.92 251658.24
00:25:20.435 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.435 Job: Nvme4n1 ended in about 0.97 seconds with error
00:25:20.435 Verification LBA range: start 0x0 length 0x400
00:25:20.435 Nvme4n1 : 0.97 131.67 8.23 65.84 0.00 301212.73 19988.48 272629.76
00:25:20.435 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.435 Job: Nvme5n1 ended in about 0.97 seconds with error
00:25:20.435 Verification LBA range: start 0x0 length 0x400
00:25:20.435 Nvme5n1 : 0.97 131.34 8.21 65.67 0.00 295576.46 25122.13 255153.49
00:25:20.435 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.435 Job: Nvme6n1 ended in about 0.98 seconds with error
00:25:20.435 Verification LBA range: start 0x0 length 0x400
00:25:20.435 Nvme6n1 : 0.98 196.52 12.28 65.51 0.00 217380.48 20753.07 251658.24
00:25:20.435 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.435 Job: Nvme7n1 ended in about 0.96 seconds with error
00:25:20.435 Verification LBA range: start 0x0 length 0x400
00:25:20.435 Nvme7n1 : 0.96 200.74 12.55 66.91 0.00 207401.17 16602.45 253405.87
00:25:20.435 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.435 Job: Nvme8n1 ended in about 0.96 seconds with error
00:25:20.435 Verification LBA range: start 0x0 length 0x400
00:25:20.435 Nvme8n1 : 0.96 199.89 12.49 66.63 0.00 203554.00 3072.00 248162.99
00:25:20.435 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.435 Job: Nvme9n1 ended in about 0.98 seconds with error
00:25:20.435 Verification LBA range: start 0x0 length 0x400
00:25:20.435 Nvme9n1 : 0.98 130.04 8.13 65.02 0.00 272939.24 18459.31 274377.39
00:25:20.435 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:20.435 Job: Nvme10n1 ended in about 0.98 seconds with error
00:25:20.435 Verification LBA range: start 0x0 length 0x400
00:25:20.435 Nvme10n1 : 0.98 130.69 8.17 65.35 0.00 265060.69 14854.83 255153.49
00:25:20.435 [2024-11-26T06:34:48.533Z] ===================================================================================================================
00:25:20.435 [2024-11-26T06:34:48.533Z] Total : 1729.29 108.08 660.97 0.00 241015.68 3072.00 274377.39
00:25:20.435 [2024-11-26 07:34:48.387416] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:20.435 [2024-11-26 07:34:48.387462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:25:20.435 [2024-11-26 07:34:48.387887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.435 [2024-11-26 07:34:48.387908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77d610 with addr=10.0.0.2, port=4420
00:25:20.435 [2024-11-26 07:34:48.387920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77d610 is same with the state(6) to be set
00:25:20.435 [2024-11-26 07:34:48.388260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.435 [2024-11-26 07:34:48.388273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcce630 with addr=10.0.0.2, port=4420
00:25:20.435 [2024-11-26 07:34:48.388281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcce630 is same with the state(6) to be set
00:25:20.435 [2024-11-26 07:34:48.388294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85c930 (9): Bad file descriptor
00:25:20.435 [2024-11-26 07:34:48.388308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85b170 (9): Bad file descriptor
00:25:20.435 [2024-11-26 07:34:48.388318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863fc0 (9): Bad file descriptor
00:25:20.435 [2024-11-26 07:34:48.388329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc91d70 (9): Bad file descriptor
00:25:20.435 [2024-11-26 07:34:48.388639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.435 [2024-11-26 07:34:48.388654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc86b80 with addr=10.0.0.2, port=4420
00:25:20.435 [2024-11-26 07:34:48.388662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc86b80 is same with the state(6) to be set
00:25:20.435 [2024-11-26 07:34:48.388898]
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.435 [2024-11-26 07:34:48.388908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x865cb0 with addr=10.0.0.2, port=4420 00:25:20.435 [2024-11-26 07:34:48.388915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x865cb0 is same with the state(6) to be set 00:25:20.435 [2024-11-26 07:34:48.389250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.435 [2024-11-26 07:34:48.389261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca98a0 with addr=10.0.0.2, port=4420 00:25:20.435 [2024-11-26 07:34:48.389269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca98a0 is same with the state(6) to be set 00:25:20.436 [2024-11-26 07:34:48.389580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.436 [2024-11-26 07:34:48.389590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xca96c0 with addr=10.0.0.2, port=4420 00:25:20.436 [2024-11-26 07:34:48.389599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca96c0 is same with the state(6) to be set 00:25:20.436 [2024-11-26 07:34:48.389608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77d610 (9): Bad file descriptor 00:25:20.436 [2024-11-26 07:34:48.389623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcce630 (9): Bad file descriptor 00:25:20.436 [2024-11-26 07:34:48.389632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:25:20.436 [2024-11-26 07:34:48.389639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:25:20.436 [2024-11-26 07:34:48.389649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:25:20.436 [2024-11-26 07:34:48.389658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:25:20.436 [2024-11-26 07:34:48.389667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:25:20.436 [2024-11-26 07:34:48.389674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:25:20.436 [2024-11-26 07:34:48.389681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:25:20.436 [2024-11-26 07:34:48.389688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:25:20.436 [2024-11-26 07:34:48.389696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:25:20.436 [2024-11-26 07:34:48.389703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:25:20.436 [2024-11-26 07:34:48.389710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:25:20.436 [2024-11-26 07:34:48.389716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
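(For reference when reading the connect() failures above: errno 111 is ECONNREFUSED. The shutdown test has already killed the target, so nothing is listening on 10.0.0.2:4420 any more, every reconnect the initiator attempts is rejected immediately, and the controllers are driven into the failed state. A minimal sketch of probing such a listener from the test host follows; probe_listener is a hypothetical helper, not part of the SPDK test suite.)

#!/usr/bin/env bash
# Hypothetical helper, not part of the SPDK test suite: probe the NVMe-oF/TCP
# listener that the failed qpairs were pointed at. connect() errno 111
# (ECONNREFUSED) means nothing is accepting on the port, which is the
# expected state once the shutdown test has torn down the target.
probe_listener() {
    local addr=$1 port=$2
    if nc -z -w 1 "$addr" "$port" 2>/dev/null; then
        echo "listener up on $addr:$port"
    else
        echo "refused/unreachable on $addr:$port (initiator sees errno 111)"
    fi
}
probe_listener 10.0.0.2 4420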
00:25:20.436 [2024-11-26 07:34:48.389725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:25:20.436 [2024-11-26 07:34:48.389731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:25:20.436 [2024-11-26 07:34:48.389739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:25:20.436 [2024-11-26 07:34:48.389746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:25:20.436 [2024-11-26 07:34:48.389800] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:25:20.436 [2024-11-26 07:34:48.389813] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:25:20.436 [2024-11-26 07:34:48.390188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc86b80 (9): Bad file descriptor 00:25:20.436 [2024-11-26 07:34:48.390203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x865cb0 (9): Bad file descriptor 00:25:20.436 [2024-11-26 07:34:48.390213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca98a0 (9): Bad file descriptor 00:25:20.436 [2024-11-26 07:34:48.390223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca96c0 (9): Bad file descriptor 00:25:20.436 [2024-11-26 07:34:48.390232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:25:20.436 [2024-11-26 07:34:48.390239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:25:20.436 [2024-11-26 07:34:48.390246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:25:20.436 [2024-11-26 07:34:48.390253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:25:20.436 [2024-11-26 07:34:48.390261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:25:20.436 [2024-11-26 07:34:48.390271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:25:20.436 [2024-11-26 07:34:48.390279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:25:20.436 [2024-11-26 07:34:48.390285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
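(The "Unable to perform failover, already in progress" notices above mean a second failover request for cnode6 and cnode10 arrived while their first reset was still outstanding; bdev_nvme coalesces the requests rather than queueing them. A sketch of watching controller state from outside while this happens: the rpc.py path is taken from this workspace and bdev_nvme_get_controllers is a real SPDK RPC, but the jq filter is illustrative only, since the exact JSON layout of the RPC output varies by SPDK version.)

# Hypothetical monitoring loop, not part of this test: poll the attached
# bdev_nvme controllers while resets/failovers are in flight.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for _ in {1..5}; do
    "$RPC" bdev_nvme_get_controllers | jq -r '.[].name'
    sleep 1
done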
00:25:20.436 [2024-11-26 07:34:48.390514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:25:20.436 [2024-11-26 07:34:48.390529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:25:20.436 [2024-11-26 07:34:48.390539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:25:20.436 [2024-11-26 07:34:48.390549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:25:20.436 [2024-11-26 07:34:48.390582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:25:20.436 [2024-11-26 07:34:48.390589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:25:20.436 [2024-11-26 07:34:48.390597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:25:20.436 [2024-11-26 07:34:48.390604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:25:20.436 [2024-11-26 07:34:48.390611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:20.436 [2024-11-26 07:34:48.390618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:20.436 [2024-11-26 07:34:48.390626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:20.436 [2024-11-26 07:34:48.390633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:25:20.436 [2024-11-26 07:34:48.390640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:25:20.436 [2024-11-26 07:34:48.390646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:25:20.436 [2024-11-26 07:34:48.390654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:25:20.436 [2024-11-26 07:34:48.390662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:25:20.436 [2024-11-26 07:34:48.390669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:25:20.437 [2024-11-26 07:34:48.390675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:25:20.437 [2024-11-26 07:34:48.390683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:25:20.437 [2024-11-26 07:34:48.390690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
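(A run like this produces thousands of near-identical records, so a quick triage step is to count the terminal reset failures per subsystem. A sketch, assuming the console output was saved to build.log, a hypothetical filename:)

# Count "Resetting controller failed" events per NQN to see which cnodes
# never recovered before spdk_app_stop.
grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*, 1\] Resetting controller failed' build.log \
    | sort | uniq -c | sort -rn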
00:25:20.437 [2024-11-26 07:34:48.391028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.437 [2024-11-26 07:34:48.391043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc91d70 with addr=10.0.0.2, port=4420 00:25:20.437 [2024-11-26 07:34:48.391050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc91d70 is same with the state(6) to be set 00:25:20.437 [2024-11-26 07:34:48.391332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.437 [2024-11-26 07:34:48.391343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863fc0 with addr=10.0.0.2, port=4420 00:25:20.437 [2024-11-26 07:34:48.391350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863fc0 is same with the state(6) to be set 00:25:20.437 [2024-11-26 07:34:48.391683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.437 [2024-11-26 07:34:48.391696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85b170 with addr=10.0.0.2, port=4420 00:25:20.437 [2024-11-26 07:34:48.391705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85b170 is same with the state(6) to be set 00:25:20.437 [2024-11-26 07:34:48.392020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.437 [2024-11-26 07:34:48.392030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85c930 with addr=10.0.0.2, port=4420 00:25:20.437 [2024-11-26 07:34:48.392038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c930 is same with the state(6) to be set 00:25:20.437 [2024-11-26 07:34:48.392068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc91d70 (9): Bad file descriptor 00:25:20.437 [2024-11-26 07:34:48.392079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x863fc0 (9): Bad file descriptor 00:25:20.437 [2024-11-26 07:34:48.392089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85b170 (9): Bad file descriptor 00:25:20.437 [2024-11-26 07:34:48.392098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85c930 (9): Bad file descriptor 00:25:20.437 [2024-11-26 07:34:48.392126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:25:20.437 [2024-11-26 07:34:48.392134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:25:20.437 [2024-11-26 07:34:48.392141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:25:20.437 [2024-11-26 07:34:48.392148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:25:20.437 [2024-11-26 07:34:48.392156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:25:20.437 [2024-11-26 07:34:48.392170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:25:20.437 [2024-11-26 07:34:48.392178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
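(The "Bad file descriptor" flush errors, errno 9, are the other recurring failure mode in this block: by the time nvme_tcp_qpair_process_completions tries to drain a qpair, the reset path has already closed its socket. The same hypothetical build.log can be mined for the affected qpairs:)

# Unique qpair pointers that failed to flush; cross-reference them with the
# earlier connect() errors to confirm each maps to a refused reconnect.
grep -o 'Failed to flush tqpair=0x[0-9a-f]*' build.log | sort -u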
00:25:20.437 [2024-11-26 07:34:48.392185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:25:20.437 [2024-11-26 07:34:48.392192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:25:20.437 [2024-11-26 07:34:48.392198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:25:20.437 [2024-11-26 07:34:48.392205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:25:20.437 [2024-11-26 07:34:48.392211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:25:20.437 [2024-11-26 07:34:48.392219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:25:20.437 [2024-11-26 07:34:48.392226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:25:20.437 [2024-11-26 07:34:48.392233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:25:20.437 [2024-11-26 07:34:48.392240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:25:20.698 07:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1527701 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1527701 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1527701 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:25:21.638 07:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:21.638 rmmod nvme_tcp 00:25:21.638 rmmod nvme_fabrics 00:25:21.638 rmmod nvme_keyring 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1527313 ']' 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1527313 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1527313 ']' 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1527313 00:25:21.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1527313) - No such process 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1527313 is not found' 00:25:21.638 Process with pid 1527313 is not found 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 
-- # iptables-restore 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.638 07:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:24.183 00:25:24.183 real 0m7.741s 00:25:24.183 user 0m18.676s 00:25:24.183 sys 0m1.286s 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:25:24.183 ************************************ 00:25:24.183 END TEST nvmf_shutdown_tc3 00:25:24.183 ************************************ 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:24.183 ************************************ 00:25:24.183 START TEST nvmf_shutdown_tc4 00:25:24.183 ************************************ 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:24.183 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:24.184 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:24.184 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:24.184 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:24.184 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:24.184 07:34:51 
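(The scan above walks the supported PCI IDs and resolves each matching function to its kernel netdev through sysfs, which is how the two E810 ports 0000:4b:00.0 and 0000:4b:00.1 map to cvl_0_0 and cvl_0_1 here. A minimal standalone sketch of that lookup, reusing the PCI address from this run — it would differ on another host:

    pci=0000:4b:00.0                          # first E810 port found above
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue             # no entry => port not bound to a net driver
        echo "Found net device under $pci: ${dev##*/}"
    done
)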
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:24.184 07:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:24.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:25:24.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:25:24.184 00:25:24.184 --- 10.0.0.2 ping statistics --- 00:25:24.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.184 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:24.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:25:24.184 00:25:24.184 --- 10.0.0.1 ping statistics --- 00:25:24.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.184 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1529026 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1529026 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1529026 ']' 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.184 07:34:52 
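(nvmftestinit above splits the two ports across network namespaces so a single host can be both target and initiator on real hardware: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2 (target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (initiator side), an iptables rule opens the NVMe/TCP port, and the two pings confirm reachability in both directions. Condensed from the trace, the plumbing it just ran is:

    ip netns add cvl_0_0_ns_spdk                                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
)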
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.184 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.185 07:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:24.445 [2024-11-26 07:34:52.308404] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:25:24.445 [2024-11-26 07:34:52.308469] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.445 [2024-11-26 07:34:52.414307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:24.445 [2024-11-26 07:34:52.467047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.445 [2024-11-26 07:34:52.467101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.445 [2024-11-26 07:34:52.467110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.445 [2024-11-26 07:34:52.467118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.445 [2024-11-26 07:34:52.467124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.445 [2024-11-26 07:34:52.469152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.445 [2024-11-26 07:34:52.469320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.445 [2024-11-26 07:34:52.469633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:24.445 [2024-11-26 07:34:52.469636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:25.386 [2024-11-26 07:34:53.169206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.386 07:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:25.386 
07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.386 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:25.386 Malloc1 00:25:25.386 [2024-11-26 07:34:53.278348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.386 Malloc2 00:25:25.386 Malloc3 00:25:25.386 Malloc4 00:25:25.386 Malloc5 00:25:25.386 Malloc6 00:25:25.654 Malloc7 00:25:25.654 Malloc8 00:25:25.654 Malloc9 00:25:25.654 Malloc10 00:25:25.654 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.654 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:25.654 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:25.654 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:25.654 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1529253 00:25:25.654 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:25:25.654 07:34:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:25:25.917 [2024-11-26 07:34:53.763755] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
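(The Malloc1..Malloc10 lines above come from the create_subsystems loop: each iteration appends one RPC group to rpcs.txt, and the file is then played through a single rpc_cmd call, after which spdk_nvme_perf is pointed at the discovery service for a 20-second random-write load. One iteration's batch looks roughly like the following — the malloc bdev size/block values and the serial number are illustrative, since the defaults are not visible in this log, while the nqn.2016-06.io.spdk:cnodeN naming matches the errors reported below:

    # one of ten batched RPC groups (i = 1)
    bdev_malloc_create -b Malloc1 128 512                         # illustrative 128 MiB / 512 B
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1  # -a: allow any host
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator-side load, exactly as launched above: queue depth 128 (-q),
    # 44 KiB writes (-o 45056), random-write pattern for 20 s, plus -O 4096
    # and -P 4 (I/O unit size and qpairs per namespace, per perf's options)
    spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
)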
00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1529026 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1529026 ']' 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1529026 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1529026 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1529026' 00:25:31.209 killing process with pid 1529026 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1529026 00:25:31.209 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1529026 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 starting I/O failed: -6 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 starting I/O failed: -6 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 starting I/O failed: -6 00:25:31.209 [2024-11-26 07:34:58.761774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4cd0 is same with the state(6) to be set 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 [2024-11-26 07:34:58.761817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4cd0 is same with the state(6) to be set 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 [2024-11-26 07:34:58.761823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4cd0 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.761829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4cd0 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.761833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4cd0 is same with the state(6) to be set 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 
Write completed with error (sct=0, sc=8) 00:25:31.209 starting I/O failed: -6 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 starting I/O failed: -6 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 starting I/O failed: -6 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 starting I/O failed: -6 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 [2024-11-26 07:34:58.762116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf51a0 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf51a0 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf51a0 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf51a0 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf51a0 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf51a0 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf51a0 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf51a0 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:31.209 starting I/O failed: -6 00:25:31.209 starting I/O failed: -6 00:25:31.209 [2024-11-26 07:34:58.762367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5670 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5670 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5670 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5670 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5670 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5670 is same with the 
state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5670 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5670 is same with the state(6) to be set 00:25:31.209 starting I/O failed: -6 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 starting I/O failed: -6 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 starting I/O failed: -6 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 [2024-11-26 07:34:58.762608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4800 is same with the state(6) to be set 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 [2024-11-26 07:34:58.762630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4800 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4800 is same with starting I/O failed: -6 00:25:31.209 the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4800 is same with the state(6) to be set 00:25:31.209 Write completed with error (sct=0, sc=8) 00:25:31.209 [2024-11-26 07:34:58.762648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4800 is same with the state(6) to be set 00:25:31.209 [2024-11-26 07:34:58.762653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4800 is same with starting I/O failed: -6 00:25:31.209 the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.762659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4800 is same with the state(6) to be set 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 [2024-11-26 07:34:58.762664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4800 is same with the state(6) to be set 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 
00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 [2024-11-26 07:34:58.763120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6010 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.763133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6010 is same with the state(6) to be set 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 [2024-11-26 07:34:58.763138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6010 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.763144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6010 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.763148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6010 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.763153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf6010 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.763172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 [2024-11-26 07:34:58.763537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf64e0 is same with the state(6) to be set 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 [2024-11-26 
07:34:58.763547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf64e0 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.763552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf64e0 is same with the state(6) to be set 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 [2024-11-26 07:34:58.763791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf69b0 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.763803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf69b0 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.763808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf69b0 is same with the state(6) to be set 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 [2024-11-26 07:34:58.763813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf69b0 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.763818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf69b0 is same with the state(6) to be set 00:25:31.210 starting I/O failed: -6 00:25:31.210 [2024-11-26 07:34:58.763823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf69b0 is same with the state(6) to be set 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed 
with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 Write completed with error (sct=0, sc=8) 00:25:31.210 starting I/O failed: -6 00:25:31.210 [2024-11-26 07:34:58.764101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:31.210 [2024-11-26 07:34:58.764195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5b40 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.764209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5b40 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.764214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5b40 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.764220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5b40 is same with the state(6) to be set 00:25:31.210 [2024-11-26 07:34:58.764224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5b40 is same with the state(6) to be set 00:25:31.211 [2024-11-26 07:34:58.764229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5b40 is same with the state(6) to be set 00:25:31.211 [2024-11-26 07:34:58.764235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5b40 is same with the state(6) to be set 00:25:31.211 [2024-11-26 07:34:58.764240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf5b40 is same with the state(6) to be set 00:25:31.211 starting I/O failed: -6 00:25:31.211 starting I/O failed: -6 00:25:31.211 starting I/O failed: -6 00:25:31.211 starting I/O failed: -6 00:25:31.211 starting I/O failed: -6 00:25:31.211 starting I/O failed: -6 00:25:31.211 starting I/O failed: -6 00:25:31.211 starting I/O failed: -6 00:25:31.211 starting I/O failed: -6 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error 
(sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error 
(sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 [2024-11-26 07:34:58.765910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:31.211 NVMe io qpair process completion error 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 [2024-11-26 07:34:58.766477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6480 is same with the state(6) to be set 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 [2024-11-26 07:34:58.766492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6480 is same with the state(6) to be set 00:25:31.211 [2024-11-26 07:34:58.766498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6480 is same with the state(6) to be set 00:25:31.211 [2024-11-26 07:34:58.766503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6480 is same with the state(6) to be set 00:25:31.211 [2024-11-26 07:34:58.766508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6480 is same with the state(6) to be set 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 [2024-11-26 07:34:58.766513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6480 is same with the state(6) to be set 00:25:31.211 [2024-11-26 07:34:58.766518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6480 is same with the state(6) to be set 00:25:31.211 [2024-11-26 07:34:58.766523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6480 is same with the state(6) to be set 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 starting I/O failed: -6 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 00:25:31.211 Write completed with error (sct=0, sc=8) 
00:25:31.211 starting I/O failed: -6
00:25:31.211 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries from 00:25:31.211 through 00:25:31.219 elided; the distinct controller errors are retained below ...]
00:25:31.212 [2024-11-26 07:34:58.766936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:31.212 [2024-11-26 07:34:58.767823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:31.212 [2024-11-26 07:34:58.768737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:31.213 [2024-11-26 07:34:58.770109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:31.213 NVMe io qpair process completion error
00:25:31.213 [2024-11-26 07:34:58.771265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:31.214 [2024-11-26 07:34:58.772056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:31.214 [2024-11-26 07:34:58.772989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:31.215 [2024-11-26 07:34:58.774683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:31.215 NVMe io qpair process completion error
00:25:31.215 [2024-11-26 07:34:58.775902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:31.216 [2024-11-26 07:34:58.776704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:31.216 [2024-11-26 07:34:58.777635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:31.217 [2024-11-26 07:34:58.780607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:31.217 NVMe io qpair process completion error
00:25:31.217 [2024-11-26 07:34:58.781859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:25:31.217 [2024-11-26 07:34:58.782702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:31.218 [2024-11-26 07:34:58.783636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:31.218 [2024-11-26 07:34:58.785266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:31.218 NVMe io qpair process completion error
00:25:31.219 [2024-11-26 07:34:58.786486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:31.219 [2024-11-26 07:34:58.787303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:31.219 Write completed with error (sct=0,
sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 [2024-11-26 07:34:58.788235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:31.219 starting I/O failed: -6 00:25:31.219 starting I/O failed: -6 00:25:31.219 starting I/O failed: -6 00:25:31.219 starting I/O failed: -6 00:25:31.219 starting I/O failed: -6 00:25:31.219 starting I/O failed: -6 00:25:31.219 starting I/O failed: -6 00:25:31.219 starting I/O failed: -6 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.219 starting I/O failed: -6 00:25:31.219 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, 
sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 [2024-11-26 07:34:58.791112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:31.220 NVMe io qpair process completion error 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 starting I/O failed: -6 00:25:31.220 Write completed with error (sct=0, sc=8) 00:25:31.220 Write 
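The repeated statuses above decode as follows: sct=0 is the NVMe generic command status type, and sc=8 in that set is "Command Aborted due to SQ Deletion" per the NVMe base specification; -6 is -ENXIO ("No such device or address"), the errno the TCP transport reports once the target side of the connection is gone. Below is a minimal sketch of where both values surface in an SPDK application using the public spdk_nvme_* API; the example_* names are illustrative, not the test's actual source.

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative I/O completion callback: prints the same message shape as the
 * log above. sct/sc are copied verbatim from the NVMe completion entry. */
static void
example_write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct=0 (generic status), sc=8 ("Command Aborted due to
		 * SQ Deletion") is what this test run keeps printing. */
		fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
			cpl->status.sct, cpl->status.sc);
	}
}

/* Illustrative poll step: a negative return from
 * spdk_nvme_qpair_process_completions() is a transport-level errno;
 * -ENXIO (-6) matches the "CQ transport error -6" lines above. */
static void
example_poll(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
	if (rc == -ENXIO) {
		fprintf(stderr, "CQ transport error %d on qpair\n", rc);
	}
}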
00:25:31.220 Write completed with error (sct=0, sc=8)
00:25:31.220 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.220 [2024-11-26 07:34:58.792243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.221 [2024-11-26 07:34:58.793060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.221 [2024-11-26 07:34:58.793990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.222 [2024-11-26 07:34:58.795466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:31.222 NVMe io qpair process completion error
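The "starting I/O failed: -6" lines come from the submission side rather than the completion side: once a qpair has failed, new writes are rejected synchronously with -ENXIO instead of being queued. A hedged sketch of that path follows; the example_* names are illustrative, and the payload buffer is assumed to come from spdk_dma_zmalloc(), since SPDK requires DMA-safe buffers for I/O.

#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback as sketched earlier; body elided here. */
static void
example_write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
}

/* Illustrative submission: spdk_nvme_ns_cmd_write() returns a negative
 * errno (here -ENXIO, i.e. -6) when the qpair is already failed. */
static void
example_submit_one(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		   void *dma_buf, uint64_t lba)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, dma_buf, lba,
					1 /* lba_count */,
					example_write_done, NULL,
					0 /* io_flags */);
	if (rc != 0) {
		/* Matches the "starting I/O failed: -6" lines above. */
		fprintf(stderr, "starting I/O failed: %d\n", rc);
	}
}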
00:25:31.222 Write completed with error (sct=0, sc=8)
00:25:31.222 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.222 [2024-11-26 07:34:58.796748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.222 [2024-11-26 07:34:58.797703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.223 [2024-11-26 07:34:58.798632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.223 [2024-11-26 07:34:58.802692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:25:31.223 NVMe io qpair process completion error
00:25:31.223 Write completed with error (sct=0, sc=8)
00:25:31.223 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.224 [2024-11-26 07:34:58.803630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.224 [2024-11-26 07:34:58.804626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.224 [2024-11-26 07:34:58.805553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.225 [2024-11-26 07:34:58.807233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:31.225 NVMe io qpair process completion error
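Each controller (cnode3, cnode4, cnode6, cnode8, cnode9) reports the transport error once per I/O qpair (ids 1 through 4) and then prints "NVMe io qpair process completion error". In this test the targets are being torn down on purpose, so the errors are expected and nothing is recovered; an application that wanted to recover could attempt a controller-level reset once every qpair reports -ENXIO. A hedged sketch of such a step, using SPDK's public spdk_nvme_ctrlr_reset() (example_* is illustrative):

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative recovery step, not something this test performs: reset the
 * controller after all of its qpairs have failed with -ENXIO. */
static void
example_try_reset(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc = spdk_nvme_ctrlr_reset(ctrlr);
	if (rc != 0) {
		fprintf(stderr, "controller reset failed: %d\n", rc);
	}
}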
00:25:31.225 Write completed with error (sct=0, sc=8)
00:25:31.225 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.225 [2024-11-26 07:34:58.808381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.226 [2024-11-26 07:34:58.809204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.226 [2024-11-26 07:34:58.810126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:25:31.226 Write
completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.226 starting I/O failed: -6 00:25:31.226 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write 
completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 Write completed with error (sct=0, sc=8) 00:25:31.227 starting I/O failed: -6 00:25:31.227 [2024-11-26 07:34:58.813316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:31.227 NVMe io qpair process completion error 00:25:31.227 Initializing NVMe Controllers 00:25:31.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:25:31.227 Controller IO queue size 128, less than required. 00:25:31.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:31.227 Controller IO queue size 128, less than required. 00:25:31.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:25:31.227 Controller IO queue size 128, less than required. 00:25:31.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:25:31.227 Controller IO queue size 128, less than required. 00:25:31.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:25:31.227 Controller IO queue size 128, less than required. 00:25:31.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:25:31.227 Controller IO queue size 128, less than required. 00:25:31.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:25:31.227 Controller IO queue size 128, less than required. 
00:25:31.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:25:31.227 Controller IO queue size 128, less than required. 00:25:31.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:25:31.227 Controller IO queue size 128, less than required. 00:25:31.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:25:31.227 Controller IO queue size 128, less than required. 00:25:31.227 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:31.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:25:31.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:31.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:25:31.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:25:31.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:25:31.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:25:31.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:25:31.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:25:31.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:25:31.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:25:31.227 Initialization complete. Launching workers. 
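The aborted writes condensed above all carry (sct=0, sc=8); per the NVMe base specification, status code type 0 with status 0x08 is Command Aborted due to SQ Deletion, which fits shutdown_tc4 tearing the target's queues down while spdk_nvme_perf still has I/O in flight, and the CQ transport error -6 (No such device or address) is the initiator noticing the dropped TCP qpair. The queue-size advisory itself is informational: each controller capped its IO queue at 128 entries, so any overflow simply waits inside the NVMe driver. A minimal standalone sketch of an equivalent perf run against one of these subsystems (parameters are illustrative, not the ones shutdown.sh used; the per-subsystem results of the actual run follow below):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  # -q: outstanding I/O per connection; anything above the target's 128-entry
  #     IO queue is held back in the driver, which is what the advisory warns about.
  # -o: I/O size in bytes, -w: workload, -t: run time in seconds.
  "$PERF" -q 64 -o 4096 -w randwrite -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'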
00:25:31.227 ========================================================
00:25:31.227 Latency(us)
00:25:31.227 Device Information : IOPS MiB/s Average min max
00:25:31.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1924.15 82.68 66545.71 574.64 118699.97
00:25:31.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1904.81 81.85 67240.11 852.32 123216.84
00:25:31.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1878.87 80.73 68185.75 842.40 122062.46
00:25:31.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1909.69 82.06 67108.20 803.89 120035.30
00:25:31.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1897.15 81.52 67592.51 720.52 122858.37
00:25:31.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1880.15 80.79 68228.38 848.83 124983.21
00:25:31.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1851.03 79.54 69338.21 677.03 121589.34
00:25:31.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1930.74 82.96 66498.17 733.51 120930.22
00:25:31.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1820.20 78.21 70592.25 677.78 123647.99
00:25:31.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1896.73 81.50 67766.29 855.54 134785.77
00:25:31.227 ========================================================
00:25:31.227 Total : 18893.52 811.83 67889.33 574.64 134785.77
00:25:31.227
00:25:31.227 [2024-11-26 07:34:58.819441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x970ae0 is same with the state(6) to be set
[... the same recv-state error repeated for tqpair=0x970720, 0x96f410, 0x970900, 0x96ebc0, 0x96f740, 0x96eef0, 0x96fa70, 0x96e560 and 0x96e890; repeats omitted ...]
00:25:31.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:25:31.228 07:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:25:32.170 07:34:59
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1529253 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1529253 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1529253 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:25:32.170 07:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.170 rmmod nvme_tcp 00:25:32.170 rmmod nvme_fabrics 00:25:32.170 rmmod nvme_keyring 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1529026 ']' 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1529026 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1529026 ']' 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1529026 00:25:32.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1529026) - No such process 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1529026 is not found' 00:25:32.170 Process with pid 1529026 is not found 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.170 07:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.082 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:34.082 00:25:34.082 real 0m10.295s 00:25:34.082 user 0m27.891s 00:25:34.082 sys 0m4.097s 00:25:34.082 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:34.082 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:25:34.082 ************************************ 00:25:34.082 END TEST nvmf_shutdown_tc4 00:25:34.082 ************************************ 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:25:34.342 00:25:34.342 real 0m43.411s 00:25:34.342 user 1m44.534s 00:25:34.342 sys 0m14.121s 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:25:34.342 ************************************ 00:25:34.342 END TEST nvmf_shutdown 00:25:34.342 ************************************ 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:34.342 ************************************ 00:25:34.342 START TEST nvmf_nsid 00:25:34.342 ************************************ 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:25:34.342 * Looking for test storage... 00:25:34.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:25:34.342 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:34.603 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:34.603 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.603 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.603 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.603 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.603 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.603 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.603 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.603 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:34.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.604 --rc genhtml_branch_coverage=1 00:25:34.604 --rc genhtml_function_coverage=1 00:25:34.604 --rc genhtml_legend=1 00:25:34.604 --rc geninfo_all_blocks=1 00:25:34.604 --rc geninfo_unexecuted_blocks=1 00:25:34.604 00:25:34.604 ' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:34.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.604 --rc genhtml_branch_coverage=1 00:25:34.604 --rc genhtml_function_coverage=1 00:25:34.604 --rc genhtml_legend=1 00:25:34.604 --rc geninfo_all_blocks=1 00:25:34.604 --rc geninfo_unexecuted_blocks=1 00:25:34.604 00:25:34.604 ' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:34.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.604 --rc genhtml_branch_coverage=1 00:25:34.604 --rc genhtml_function_coverage=1 00:25:34.604 --rc genhtml_legend=1 00:25:34.604 --rc geninfo_all_blocks=1 00:25:34.604 --rc geninfo_unexecuted_blocks=1 00:25:34.604 00:25:34.604 ' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:34.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.604 --rc genhtml_branch_coverage=1 00:25:34.604 --rc genhtml_function_coverage=1 00:25:34.604 --rc genhtml_legend=1 00:25:34.604 --rc geninfo_all_blocks=1 00:25:34.604 --rc geninfo_unexecuted_blocks=1 00:25:34.604 00:25:34.604 ' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:34.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:34.604 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:34.605 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.605 07:35:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:42.750 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.750 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:42.751 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
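The trace above is nvmf/common.sh building its tables of supported NIC device IDs and matching this host's two Intel 0x8086:0x159b functions (E810 family, per the e810 array); the "Found net devices under ..." lines that follow then map each PCI function to its kernel net device through sysfs. A minimal sketch of that same mapping, assuming lspci is available (the device ID is taken from the "Found 0000:4b:00.x" lines above):

  # Resolve each Intel E810 (8086:159b) PCI function to its kernel netdev,
  # the same /sys/bus/pci/devices/<pci>/net/ walk the trace performs.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$net" ] && echo "$pci -> ${net##*/}"
      done
  done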
00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:42.751 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:42.751 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.751 07:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:42.751 07:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:42.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:25:42.751 00:25:42.751 --- 10.0.0.2 ping statistics --- 00:25:42.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.751 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:42.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:25:42.751 00:25:42.751 --- 10.0.0.1 ping statistics --- 00:25:42.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.751 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1534731 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1534731 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1534731 ']' 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:42.751 07:35:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:42.751 [2024-11-26 07:35:10.191943] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:25:42.751 [2024-11-26 07:35:10.192017] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.751 [2024-11-26 07:35:10.292318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.751 [2024-11-26 07:35:10.344376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.751 [2024-11-26 07:35:10.344428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.751 [2024-11-26 07:35:10.344436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.751 [2024-11-26 07:35:10.344444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.751 [2024-11-26 07:35:10.344450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.751 [2024-11-26 07:35:10.345221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1534917 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=10cd51dc-e377-4045-99af-a22664050d39 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b6b865b2-41b3-427e-9663-cc0ce4a38cfe 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=77194cfd-1db3-4c99-a1a7-dafa78a97d6d 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.013 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:43.275 null0 00:25:43.275 null1 00:25:43.275 [2024-11-26 07:35:11.125179] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:25:43.275 [2024-11-26 07:35:11.125246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534917 ] 00:25:43.275 null2 00:25:43.275 [2024-11-26 07:35:11.131079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.275 [2024-11-26 07:35:11.155347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.275 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.275 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1534917 /var/tmp/tgt2.sock 00:25:43.275 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1534917 ']' 00:25:43.275 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:25:43.275 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.275 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:25:43.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:25:43.275 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.275 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:43.275 [2024-11-26 07:35:11.217726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.275 [2024-11-26 07:35:11.269867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.537 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.537 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:25:43.537 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:25:43.799 [2024-11-26 07:35:11.830575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.799 [2024-11-26 07:35:11.846761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:25:43.799 nvme0n1 nvme0n2 00:25:43.799 nvme1n1 00:25:44.060 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:25:44.060 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:25:44.060 07:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:25:45.449 07:35:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:25:46.391 07:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 10cd51dc-e377-4045-99af-a22664050d39 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=10cd51dce377404599afa22664050d39 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 10CD51DCE377404599AFA22664050D39 00:25:46.391 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 10CD51DCE377404599AFA22664050D39 == \1\0\C\D\5\1\D\C\E\3\7\7\4\0\4\5\9\9\A\F\A\2\2\6\6\4\0\5\0\D\3\9 ]] 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b6b865b2-41b3-427e-9663-cc0ce4a38cfe 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:25:46.392 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b6b865b241b3427e9663cc0ce4a38cfe 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B6B865B241B3427E9663CC0CE4A38CFE 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B6B865B241B3427E9663CC0CE4A38CFE == \B\6\B\8\6\5\B\2\4\1\B\3\4\2\7\E\9\6\6\3\C\C\0\C\E\4\A\3\8\C\F\E ]] 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:25:46.652 07:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 77194cfd-1db3-4c99-a1a7-dafa78a97d6d 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=77194cfd1db34c99a1a7dafa78a97d6d 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 77194CFD1DB34C99A1A7DAFA78A97D6D 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 77194CFD1DB34C99A1A7DAFA78A97D6D == \7\7\1\9\4\C\F\D\1\D\B\3\4\C\9\9\A\1\A\7\D\A\F\A\7\8\A\9\7\D\6\D ]] 00:25:46.652 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1534917 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1534917 ']' 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1534917 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1534917 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1534917' 00:25:46.995 killing process with pid 1534917 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1534917 00:25:46.995 07:35:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1534917 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.338 rmmod nvme_tcp 00:25:47.338 rmmod nvme_fabrics 00:25:47.338 rmmod nvme_keyring 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1534731 ']' 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1534731 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1534731 ']' 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1534731 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1534731 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1534731' 00:25:47.338 killing process with pid 1534731 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1534731 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1534731 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.338 07:35:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.883 07:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:49.883 00:25:49.883 real 0m15.078s 00:25:49.883 user 
0m11.469s 00:25:49.883 sys 0m7.001s 00:25:49.883 07:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:49.883 07:35:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:25:49.883 ************************************ 00:25:49.883 END TEST nvmf_nsid 00:25:49.883 ************************************ 00:25:49.883 07:35:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:25:49.883 00:25:49.883 real 13m5.145s 00:25:49.883 user 27m16.214s 00:25:49.883 sys 3m55.959s 00:25:49.883 07:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:49.883 07:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:49.883 ************************************ 00:25:49.883 END TEST nvmf_target_extra 00:25:49.883 ************************************ 00:25:49.883 07:35:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:49.883 07:35:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:49.883 07:35:17 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:49.883 07:35:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:49.883 ************************************ 00:25:49.883 START TEST nvmf_host 00:25:49.883 ************************************ 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:25:49.883 * Looking for test storage... 00:25:49.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:49.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.883 --rc genhtml_branch_coverage=1 00:25:49.883 --rc genhtml_function_coverage=1 00:25:49.883 --rc genhtml_legend=1 00:25:49.883 --rc geninfo_all_blocks=1 00:25:49.883 --rc geninfo_unexecuted_blocks=1 00:25:49.883 00:25:49.883 ' 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:49.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.883 --rc genhtml_branch_coverage=1 00:25:49.883 --rc genhtml_function_coverage=1 00:25:49.883 --rc genhtml_legend=1 00:25:49.883 --rc geninfo_all_blocks=1 00:25:49.883 --rc geninfo_unexecuted_blocks=1 00:25:49.883 00:25:49.883 ' 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:49.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.883 --rc genhtml_branch_coverage=1 00:25:49.883 --rc genhtml_function_coverage=1 00:25:49.883 --rc genhtml_legend=1 00:25:49.883 --rc geninfo_all_blocks=1 00:25:49.883 --rc geninfo_unexecuted_blocks=1 00:25:49.883 00:25:49.883 ' 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:49.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:49.883 --rc genhtml_branch_coverage=1 00:25:49.883 --rc genhtml_function_coverage=1 00:25:49.883 --rc genhtml_legend=1 00:25:49.883 --rc geninfo_all_blocks=1 00:25:49.883 --rc geninfo_unexecuted_blocks=1 00:25:49.883 00:25:49.883 ' 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.883 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:49.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.884 ************************************ 00:25:49.884 START TEST nvmf_multicontroller 00:25:49.884 ************************************ 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:49.884 * Looking for test storage... 
00:25:49.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:49.884 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.146 --rc genhtml_branch_coverage=1 00:25:50.146 --rc genhtml_function_coverage=1 00:25:50.146 --rc genhtml_legend=1 00:25:50.146 --rc geninfo_all_blocks=1 00:25:50.146 --rc geninfo_unexecuted_blocks=1 00:25:50.146 00:25:50.146 ' 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.146 --rc genhtml_branch_coverage=1 00:25:50.146 --rc genhtml_function_coverage=1 00:25:50.146 --rc genhtml_legend=1 00:25:50.146 --rc geninfo_all_blocks=1 00:25:50.146 --rc geninfo_unexecuted_blocks=1 00:25:50.146 00:25:50.146 ' 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.146 --rc genhtml_branch_coverage=1 00:25:50.146 --rc genhtml_function_coverage=1 00:25:50.146 --rc genhtml_legend=1 00:25:50.146 --rc geninfo_all_blocks=1 00:25:50.146 --rc geninfo_unexecuted_blocks=1 00:25:50.146 00:25:50.146 ' 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:50.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.146 --rc genhtml_branch_coverage=1 00:25:50.146 --rc genhtml_function_coverage=1 00:25:50.146 --rc genhtml_legend=1 00:25:50.146 --rc geninfo_all_blocks=1 00:25:50.146 --rc geninfo_unexecuted_blocks=1 00:25:50.146 00:25:50.146 ' 00:25:50.146 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.147 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:25:50.147 07:35:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.147 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.147 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.147 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.147 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.147 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.147 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.147 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.147 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.147 07:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:50.147 07:35:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.147 07:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.295 
07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:58.295 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:58.295 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.295 07:35:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.295 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:58.296 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:58.296 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:25:58.296 00:25:58.296 --- 10.0.0.2 ping statistics --- 00:25:58.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.296 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:58.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:25:58.296 00:25:58.296 --- 10.0.0.1 ping statistics --- 00:25:58.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.296 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1540023 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1540023 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1540023 ']' 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.296 07:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.296 [2024-11-26 07:35:25.665059] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:25:58.296 [2024-11-26 07:35:25.665126] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.296 [2024-11-26 07:35:25.766893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:58.296 [2024-11-26 07:35:25.819287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.296 [2024-11-26 07:35:25.819334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.296 [2024-11-26 07:35:25.819344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.296 [2024-11-26 07:35:25.819351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.296 [2024-11-26 07:35:25.819358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.296 [2024-11-26 07:35:25.821461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.296 [2024-11-26 07:35:25.821623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.296 [2024-11-26 07:35:25.821624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.558 [2024-11-26 07:35:26.528115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.558 Malloc0 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.558 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.559 [2024-11-26 07:35:26.604687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.559 [2024-11-26 07:35:26.616512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.559 Malloc1 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.559 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1540321 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1540321 /var/tmp/bdevperf.sock 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1540321 ']' 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:58.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
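By this point the target side is fully provisioned over /var/tmp/spdk.sock: a TCP transport created with the harness's transport options (-t tcp -o, plus -u 8192), two 64 MiB malloc bdevs with 512-byte blocks (Malloc0, Malloc1), two subsystems (cnode1, cnode2) each carrying one namespace, and listeners on ports 4420 and 4421 of 10.0.0.2. bdevperf is then started with -z, so it idles on its own RPC socket (/var/tmp/bdevperf.sock) until a controller is attached. The same provisioning, sketched as direct scripts/rpc.py calls instead of the rpc_cmd wrapper (paths relative to an SPDK checkout):

    # sketch of the equivalent provisioning via scripts/rpc.py
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ... repeated for Malloc1 / cnode2, then the I/O generator is started:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &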
00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.819 07:35:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.762 NVMe0n1 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.762 1 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.762 request: 00:25:59.762 { 00:25:59.762 "name": "NVMe0", 00:25:59.762 "trtype": "tcp", 00:25:59.762 "traddr": "10.0.0.2", 00:25:59.762 "adrfam": "ipv4", 00:25:59.762 "trsvcid": "4420", 00:25:59.762 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:25:59.762 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:59.762 "hostaddr": "10.0.0.1", 00:25:59.762 "prchk_reftag": false, 00:25:59.762 "prchk_guard": false, 00:25:59.762 "hdgst": false, 00:25:59.762 "ddgst": false, 00:25:59.762 "allow_unrecognized_csi": false, 00:25:59.762 "method": "bdev_nvme_attach_controller", 00:25:59.762 "req_id": 1 00:25:59.762 } 00:25:59.762 Got JSON-RPC error response 00:25:59.762 response: 00:25:59.762 { 00:25:59.762 "code": -114, 00:25:59.762 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:59.762 } 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:59.762 request: 00:25:59.762 { 00:25:59.762 "name": "NVMe0", 00:25:59.762 "trtype": "tcp", 00:25:59.762 "traddr": "10.0.0.2", 00:25:59.762 "adrfam": "ipv4", 00:25:59.762 "trsvcid": "4420", 00:25:59.762 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:59.762 "hostaddr": "10.0.0.1", 00:25:59.762 "prchk_reftag": false, 00:25:59.762 "prchk_guard": false, 00:25:59.762 "hdgst": false, 00:25:59.762 "ddgst": false, 00:25:59.762 "allow_unrecognized_csi": false, 00:25:59.762 "method": "bdev_nvme_attach_controller", 00:25:59.762 "req_id": 1 00:25:59.762 } 00:25:59.762 Got JSON-RPC error response 00:25:59.762 response: 00:25:59.762 { 00:25:59.762 "code": -114, 00:25:59.762 "message": "A controller named NVMe0 already exists with the specified network path" 00:25:59.762 } 00:25:59.762 07:35:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:25:59.762 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:00.024 request: 00:26:00.024 { 00:26:00.024 "name": "NVMe0", 00:26:00.024 "trtype": "tcp", 00:26:00.024 "traddr": "10.0.0.2", 00:26:00.024 "adrfam": "ipv4", 00:26:00.024 "trsvcid": "4420", 00:26:00.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.024 "hostaddr": "10.0.0.1", 00:26:00.024 "prchk_reftag": false, 00:26:00.024 "prchk_guard": false, 00:26:00.024 "hdgst": false, 00:26:00.024 "ddgst": false, 00:26:00.024 "multipath": "disable", 00:26:00.024 "allow_unrecognized_csi": false, 00:26:00.024 "method": "bdev_nvme_attach_controller", 00:26:00.024 "req_id": 1 00:26:00.024 } 00:26:00.024 Got JSON-RPC error response 00:26:00.024 response: 00:26:00.024 { 00:26:00.024 "code": -114, 00:26:00.024 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:26:00.024 } 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:00.024 07:35:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.024 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:00.024 request: 00:26:00.024 { 00:26:00.024 "name": "NVMe0", 00:26:00.024 "trtype": "tcp", 00:26:00.024 "traddr": "10.0.0.2", 00:26:00.024 "adrfam": "ipv4", 00:26:00.024 "trsvcid": "4420", 00:26:00.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.024 "hostaddr": "10.0.0.1", 00:26:00.024 "prchk_reftag": false, 00:26:00.024 "prchk_guard": false, 00:26:00.024 "hdgst": false, 00:26:00.024 "ddgst": false, 00:26:00.024 "multipath": "failover", 00:26:00.024 "allow_unrecognized_csi": false, 00:26:00.024 "method": "bdev_nvme_attach_controller", 00:26:00.024 "req_id": 1 00:26:00.024 } 00:26:00.024 Got JSON-RPC error response 00:26:00.024 response: 00:26:00.024 { 00:26:00.024 "code": -114, 00:26:00.024 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:00.024 } 00:26:00.025 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:00.025 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:00.025 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:00.025 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:00.025 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:00.025 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:00.025 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.025 07:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:00.025 NVMe0n1 00:26:00.025 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
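The four NOT-wrapped attach attempts above are deliberate failure cases: once a controller named NVMe0 exists, bdev_nvme_attach_controller must reject a second attach that changes the host NQN, targets a different subsystem (cnode2), requests multipath disable, or requests failover over the path that is already attached. Each rejection surfaces as JSON-RPC error -114 (-EALREADY) with an "already exists" message, and rpc_cmd's exit status is inverted by NOT so the expected failure counts as a pass. A bash sketch of the same assertion pattern, with scripts/rpc.py standing in for the rpc_cmd wrapper:

    # sketch: a conflicting re-attach must fail; invert the exit status to assert it
    if ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
           -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1; then
        echo "FAIL: attach to a different subsystem under the same name was accepted" >&2
        exit 1
    fi
    # a compatible attach (same subsystem, second portal) is accepted as an extra path
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1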
00:26:00.025 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:00.025 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.025 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:00.025 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.025 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:00.025 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.025 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:00.286 00:26:00.286 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.286 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:00.286 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:00.286 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.286 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:00.286 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.286 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:00.286 07:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:01.672 { 00:26:01.672 "results": [ 00:26:01.672 { 00:26:01.672 "job": "NVMe0n1", 00:26:01.672 "core_mask": "0x1", 00:26:01.672 "workload": "write", 00:26:01.672 "status": "finished", 00:26:01.672 "queue_depth": 128, 00:26:01.672 "io_size": 4096, 00:26:01.672 "runtime": 1.006428, 00:26:01.672 "iops": 19892.133366718732, 00:26:01.672 "mibps": 77.70364596374505, 00:26:01.672 "io_failed": 0, 00:26:01.672 "io_timeout": 0, 00:26:01.672 "avg_latency_us": 6418.961902097902, 00:26:01.672 "min_latency_us": 3932.16, 00:26:01.672 "max_latency_us": 12834.133333333333 00:26:01.672 } 00:26:01.672 ], 00:26:01.672 "core_count": 1 00:26:01.672 } 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1540321 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1540321 ']' 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1540321 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540321 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540321' 00:26:01.672 killing process with pid 1540321 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1540321 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1540321 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:01.672 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:26:01.672 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:01.672 [2024-11-26 07:35:26.745066] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:26:01.672 [2024-11-26 07:35:26.745141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540321 ] 00:26:01.672 [2024-11-26 07:35:26.838682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.672 [2024-11-26 07:35:26.891481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.672 [2024-11-26 07:35:28.203232] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name d5a47edb-0c7c-41de-b0e7-8abb152dd83e already exists 00:26:01.672 [2024-11-26 07:35:28.203271] bdev.c:7832:bdev_register: *ERROR*: Unable to add uuid:d5a47edb-0c7c-41de-b0e7-8abb152dd83e alias for bdev NVMe1n1 00:26:01.672 [2024-11-26 07:35:28.203280] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:01.672 Running I/O for 1 seconds... 00:26:01.672 19828.00 IOPS, 77.45 MiB/s 00:26:01.672 Latency(us) 00:26:01.672 [2024-11-26T06:35:29.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.672 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:01.672 NVMe0n1 : 1.01 19892.13 77.70 0.00 0.00 6418.96 3932.16 12834.13 00:26:01.672 [2024-11-26T06:35:29.770Z] =================================================================================================================== 00:26:01.672 [2024-11-26T06:35:29.771Z] Total : 19892.13 77.70 0.00 0.00 6418.96 3932.16 12834.13 00:26:01.673 Received shutdown signal, test time was about 1.000000 seconds 00:26:01.673 00:26:01.673 Latency(us) 00:26:01.673 [2024-11-26T06:35:29.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.673 [2024-11-26T06:35:29.771Z] =================================================================================================================== 00:26:01.673 [2024-11-26T06:35:29.771Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:01.673 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:01.673 rmmod nvme_tcp 00:26:01.673 rmmod nvme_fabrics 00:26:01.673 rmmod nvme_keyring 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:26:01.673 
07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1540023 ']' 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1540023 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1540023 ']' 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1540023 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540023 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540023' 00:26:01.673 killing process with pid 1540023 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1540023 00:26:01.673 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1540023 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.934 07:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.481 07:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:04.481 00:26:04.481 real 0m14.162s 00:26:04.481 user 0m17.503s 00:26:04.481 sys 0m6.598s 00:26:04.481 07:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.481 07:35:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:04.481 ************************************ 00:26:04.481 END TEST nvmf_multicontroller 00:26:04.481 ************************************ 00:26:04.481 07:35:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:26:04.481 07:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:04.481 07:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.481 07:35:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.481 ************************************ 00:26:04.481 START TEST nvmf_aer 00:26:04.481 ************************************ 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:04.481 * Looking for test storage... 00:26:04.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:04.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.481 --rc genhtml_branch_coverage=1 00:26:04.481 --rc genhtml_function_coverage=1 00:26:04.481 --rc genhtml_legend=1 00:26:04.481 --rc geninfo_all_blocks=1 00:26:04.481 --rc geninfo_unexecuted_blocks=1 00:26:04.481 00:26:04.481 ' 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:04.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.481 --rc genhtml_branch_coverage=1 00:26:04.481 --rc genhtml_function_coverage=1 00:26:04.481 --rc genhtml_legend=1 00:26:04.481 --rc geninfo_all_blocks=1 00:26:04.481 --rc geninfo_unexecuted_blocks=1 00:26:04.481 00:26:04.481 ' 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:04.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.481 --rc genhtml_branch_coverage=1 00:26:04.481 --rc genhtml_function_coverage=1 00:26:04.481 --rc genhtml_legend=1 00:26:04.481 --rc geninfo_all_blocks=1 00:26:04.481 --rc geninfo_unexecuted_blocks=1 00:26:04.481 00:26:04.481 ' 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:04.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.481 --rc genhtml_branch_coverage=1 00:26:04.481 --rc genhtml_function_coverage=1 00:26:04.481 --rc genhtml_legend=1 00:26:04.481 --rc geninfo_all_blocks=1 00:26:04.481 --rc geninfo_unexecuted_blocks=1 00:26:04.481 00:26:04.481 ' 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.481 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.482 07:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.625 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:12.626 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:12.626 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:12.626 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.626 07:35:39 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:12.626 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:12.626 
07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:12.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:26:12.626 00:26:12.626 --- 10.0.0.2 ping statistics --- 00:26:12.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.626 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:26:12.626 00:26:12.626 --- 10.0.0.1 ping statistics --- 00:26:12.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.626 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1545068 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1545068 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1545068 ']' 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.626 07:35:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:12.626 [2024-11-26 07:35:39.859450] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
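For orientation, the setup traced above amounts to a two-port loopback: one E810 port is moved into a private network namespace and addressed as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and an iptables rule opens the NVMe/TCP port before the cross-namespace pings verify the path. A condensed sketch using the device names from this run (the nvmf_tgt binary path and RPC socket location are per-workspace details, not fixed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target, as above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The harness additionally tags the real iptables rule with an SPDK_NVMF comment so teardown can strip it with iptables-save | grep -v SPDK_NVMF | iptables-restore, as visible in the cleanup at the end of each test below.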
00:26:12.626 [2024-11-26 07:35:39.859518] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.626 [2024-11-26 07:35:39.960339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:12.626 [2024-11-26 07:35:40.016334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.626 [2024-11-26 07:35:40.016388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.626 [2024-11-26 07:35:40.016398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.626 [2024-11-26 07:35:40.016408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.626 [2024-11-26 07:35:40.016416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.626 [2024-11-26 07:35:40.018448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.626 [2024-11-26 07:35:40.018614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.626 [2024-11-26 07:35:40.018776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.626 [2024-11-26 07:35:40.018777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:12.626 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.626 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:12.627 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:12.627 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:12.627 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:12.888 [2024-11-26 07:35:40.728218] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:12.888 Malloc0 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:12.888 [2024-11-26 07:35:40.806205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:12.888 [ 00:26:12.888 { 00:26:12.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:12.888 "subtype": "Discovery", 00:26:12.888 "listen_addresses": [], 00:26:12.888 "allow_any_host": true, 00:26:12.888 "hosts": [] 00:26:12.888 }, 00:26:12.888 { 00:26:12.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.888 "subtype": "NVMe", 00:26:12.888 "listen_addresses": [ 00:26:12.888 { 00:26:12.888 "trtype": "TCP", 00:26:12.888 "adrfam": "IPv4", 00:26:12.888 "traddr": "10.0.0.2", 00:26:12.888 "trsvcid": "4420" 00:26:12.888 } 00:26:12.888 ], 00:26:12.888 "allow_any_host": true, 00:26:12.888 "hosts": [], 00:26:12.888 "serial_number": "SPDK00000000000001", 00:26:12.888 "model_number": "SPDK bdev Controller", 00:26:12.888 "max_namespaces": 2, 00:26:12.888 "min_cntlid": 1, 00:26:12.888 "max_cntlid": 65519, 00:26:12.888 "namespaces": [ 00:26:12.888 { 00:26:12.888 "nsid": 1, 00:26:12.888 "bdev_name": "Malloc0", 00:26:12.888 "name": "Malloc0", 00:26:12.888 "nguid": "7D1DE4EF35C9467592071AD486535247", 00:26:12.888 "uuid": "7d1de4ef-35c9-4675-9207-1ad486535247" 00:26:12.888 } 00:26:12.888 ] 00:26:12.888 } 00:26:12.888 ] 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1545250 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:12.888 07:35:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:13.150 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:13.150 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:26:13.150 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:26:13.150 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:13.150 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:13.150 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 3 -lt 200 ']' 00:26:13.150 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=4 00:26:13.150 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:13.412 Malloc1 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:13.412 Asynchronous Event Request test 00:26:13.412 Attaching to 10.0.0.2 00:26:13.412 Attached to 10.0.0.2 00:26:13.412 Registering asynchronous event callbacks... 00:26:13.412 Starting namespace attribute notice tests for all controllers... 00:26:13.412 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:13.412 aer_cb - Changed Namespace 00:26:13.412 Cleaning up... 
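Stripped of the xtrace noise, the AER scenario is a short RPC sequence: stand up a TCP subsystem with one namespace, start test/nvme/aer waiting on a touch file, then hot-add a second namespace so the target raises the namespace-attribute-changed AEN that aer_cb reports above. A minimal sketch of the target-side steps, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket (the harness wraps this as rpc_cmd):

    rpc=scripts/rpc.py                         # assumed path, run from the SPDK repo root
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # hot-add: this second namespace is what fires the AEN the aer tool waits for
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The nvmf_get_subsystems dump that follows confirms the hot add: cnode1 now reports nsid 1 (Malloc0) alongside the new nsid 2 (Malloc1).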
00:26:13.412 [ 00:26:13.412 { 00:26:13.412 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:13.412 "subtype": "Discovery", 00:26:13.412 "listen_addresses": [], 00:26:13.412 "allow_any_host": true, 00:26:13.412 "hosts": [] 00:26:13.412 }, 00:26:13.412 { 00:26:13.412 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.412 "subtype": "NVMe", 00:26:13.412 "listen_addresses": [ 00:26:13.412 { 00:26:13.412 "trtype": "TCP", 00:26:13.412 "adrfam": "IPv4", 00:26:13.412 "traddr": "10.0.0.2", 00:26:13.412 "trsvcid": "4420" 00:26:13.412 } 00:26:13.412 ], 00:26:13.412 "allow_any_host": true, 00:26:13.412 "hosts": [], 00:26:13.412 "serial_number": "SPDK00000000000001", 00:26:13.412 "model_number": "SPDK bdev Controller", 00:26:13.412 "max_namespaces": 2, 00:26:13.412 "min_cntlid": 1, 00:26:13.412 "max_cntlid": 65519, 00:26:13.412 "namespaces": [ 00:26:13.412 { 00:26:13.412 "nsid": 1, 00:26:13.412 "bdev_name": "Malloc0", 00:26:13.412 "name": "Malloc0", 00:26:13.412 "nguid": "7D1DE4EF35C9467592071AD486535247", 00:26:13.412 "uuid": "7d1de4ef-35c9-4675-9207-1ad486535247" 00:26:13.412 }, 00:26:13.412 { 00:26:13.412 "nsid": 2, 00:26:13.412 "bdev_name": "Malloc1", 00:26:13.412 "name": "Malloc1", 00:26:13.412 "nguid": "C404A7B8469B4980A0C5AE29D0A22428", 00:26:13.412 "uuid": "c404a7b8-469b-4980-a0c5-ae29d0a22428" 00:26:13.412 } 00:26:13.412 ] 00:26:13.412 } 00:26:13.412 ] 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1545250 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:13.412 rmmod 
nvme_tcp 00:26:13.412 rmmod nvme_fabrics 00:26:13.412 rmmod nvme_keyring 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1545068 ']' 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1545068 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1545068 ']' 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1545068 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:13.412 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1545068 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1545068' 00:26:13.674 killing process with pid 1545068 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1545068 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1545068 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.674 07:35:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.222 07:35:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:16.222 00:26:16.222 real 0m11.755s 00:26:16.222 user 0m8.953s 00:26:16.222 sys 0m6.233s 00:26:16.222 07:35:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:16.222 07:35:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:16.222 ************************************ 00:26:16.223 END TEST nvmf_aer 00:26:16.223 ************************************ 00:26:16.223 07:35:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:16.223 07:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:16.223 07:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:16.223 07:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.223 ************************************ 00:26:16.223 START TEST nvmf_async_init 00:26:16.223 ************************************ 00:26:16.223 07:35:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:16.223 * Looking for test storage... 00:26:16.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:16.223 07:35:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:16.223 07:35:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:26:16.223 07:35:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:16.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.223 --rc genhtml_branch_coverage=1 00:26:16.223 --rc genhtml_function_coverage=1 00:26:16.223 --rc genhtml_legend=1 00:26:16.223 --rc geninfo_all_blocks=1 00:26:16.223 --rc geninfo_unexecuted_blocks=1 00:26:16.223 00:26:16.223 ' 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:16.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.223 --rc genhtml_branch_coverage=1 00:26:16.223 --rc genhtml_function_coverage=1 00:26:16.223 --rc genhtml_legend=1 00:26:16.223 --rc geninfo_all_blocks=1 00:26:16.223 --rc geninfo_unexecuted_blocks=1 00:26:16.223 00:26:16.223 ' 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:16.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.223 --rc genhtml_branch_coverage=1 00:26:16.223 --rc genhtml_function_coverage=1 00:26:16.223 --rc genhtml_legend=1 00:26:16.223 --rc geninfo_all_blocks=1 00:26:16.223 --rc geninfo_unexecuted_blocks=1 00:26:16.223 00:26:16.223 ' 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:16.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:16.223 --rc genhtml_branch_coverage=1 00:26:16.223 --rc genhtml_function_coverage=1 00:26:16.223 --rc genhtml_legend=1 00:26:16.223 --rc geninfo_all_blocks=1 00:26:16.223 --rc geninfo_unexecuted_blocks=1 00:26:16.223 00:26:16.223 ' 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.223 07:35:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.223 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:16.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:16.224 07:35:44 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a40747068ecc49c781ff79d146324dc2 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:16.224 07:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:24.366 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:24.367 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:24.367 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:24.367 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:24.367 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:24.367 07:35:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:24.367 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:24.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:26:24.368 00:26:24.368 --- 10.0.0.2 ping statistics --- 00:26:24.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.368 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:26:24.368 00:26:24.368 --- 10.0.0.1 ping statistics --- 00:26:24.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.368 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1549552 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1549552 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1549552 ']' 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.368 07:35:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.368 [2024-11-26 07:35:51.708365] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
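As in the aer run, nvmfappstart launches the target inside the namespace and waitforlisten then blocks until the process is up and listening on the UNIX domain socket rather than sleeping a fixed time (the max_retries=100 in the trace is its retry cap). A rough stand-in for that wait, assuming scripts/rpc.py and the default socket path, not the helper's actual implementation:

    # poll the RPC socket until the target answers, then proceed
    while ! scripts/rpc.py -s /var/tmp/spdk.sock -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done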
00:26:24.368 [2024-11-26 07:35:51.708431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.368 [2024-11-26 07:35:51.810780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.368 [2024-11-26 07:35:51.862884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.368 [2024-11-26 07:35:51.862939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.368 [2024-11-26 07:35:51.862949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.368 [2024-11-26 07:35:51.862957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.368 [2024-11-26 07:35:51.862963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.368 [2024-11-26 07:35:51.863737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.631 [2024-11-26 07:35:52.586413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.631 null0 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a40747068ecc49c781ff79d146324dc2 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.631 [2024-11-26 07:35:52.646781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.631 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.893 nvme0n1 00:26:24.893 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.893 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:24.893 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.893 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.893 [ 00:26:24.893 { 00:26:24.893 "name": "nvme0n1", 00:26:24.893 "aliases": [ 00:26:24.893 "a4074706-8ecc-49c7-81ff-79d146324dc2" 00:26:24.893 ], 00:26:24.893 "product_name": "NVMe disk", 00:26:24.893 "block_size": 512, 00:26:24.893 "num_blocks": 2097152, 00:26:24.893 "uuid": "a4074706-8ecc-49c7-81ff-79d146324dc2", 00:26:24.893 "numa_id": 0, 00:26:24.893 "assigned_rate_limits": { 00:26:24.893 "rw_ios_per_sec": 0, 00:26:24.893 "rw_mbytes_per_sec": 0, 00:26:24.893 "r_mbytes_per_sec": 0, 00:26:24.893 "w_mbytes_per_sec": 0 00:26:24.893 }, 00:26:24.893 "claimed": false, 00:26:24.893 "zoned": false, 00:26:24.893 "supported_io_types": { 00:26:24.893 "read": true, 00:26:24.893 "write": true, 00:26:24.893 "unmap": false, 00:26:24.893 "flush": true, 00:26:24.893 "reset": true, 00:26:24.893 "nvme_admin": true, 00:26:24.893 "nvme_io": true, 00:26:24.893 "nvme_io_md": false, 00:26:24.893 "write_zeroes": true, 00:26:24.893 "zcopy": false, 00:26:24.893 "get_zone_info": false, 00:26:24.893 "zone_management": false, 00:26:24.893 "zone_append": false, 00:26:24.893 "compare": true, 00:26:24.893 "compare_and_write": true, 00:26:24.893 "abort": true, 00:26:24.893 "seek_hole": false, 00:26:24.893 "seek_data": false, 00:26:24.893 "copy": true, 00:26:24.893 "nvme_iov_md": false 00:26:24.893 }, 00:26:24.893 
"memory_domains": [ 00:26:24.893 { 00:26:24.893 "dma_device_id": "system", 00:26:24.893 "dma_device_type": 1 00:26:24.893 } 00:26:24.893 ], 00:26:24.893 "driver_specific": { 00:26:24.893 "nvme": [ 00:26:24.893 { 00:26:24.893 "trid": { 00:26:24.893 "trtype": "TCP", 00:26:24.893 "adrfam": "IPv4", 00:26:24.893 "traddr": "10.0.0.2", 00:26:24.893 "trsvcid": "4420", 00:26:24.893 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:24.893 }, 00:26:24.893 "ctrlr_data": { 00:26:24.893 "cntlid": 1, 00:26:24.893 "vendor_id": "0x8086", 00:26:24.893 "model_number": "SPDK bdev Controller", 00:26:24.893 "serial_number": "00000000000000000000", 00:26:24.893 "firmware_revision": "25.01", 00:26:24.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:24.893 "oacs": { 00:26:24.893 "security": 0, 00:26:24.893 "format": 0, 00:26:24.893 "firmware": 0, 00:26:24.893 "ns_manage": 0 00:26:24.893 }, 00:26:24.893 "multi_ctrlr": true, 00:26:24.893 "ana_reporting": false 00:26:24.893 }, 00:26:24.893 "vs": { 00:26:24.893 "nvme_version": "1.3" 00:26:24.893 }, 00:26:24.893 "ns_data": { 00:26:24.893 "id": 1, 00:26:24.893 "can_share": true 00:26:24.893 } 00:26:24.893 } 00:26:24.893 ], 00:26:24.893 "mp_policy": "active_passive" 00:26:24.893 } 00:26:24.893 } 00:26:24.893 ] 00:26:24.893 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.893 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:24.893 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.893 07:35:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:24.893 [2024-11-26 07:35:52.923268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:24.893 [2024-11-26 07:35:52.923352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1475ce0 (9): Bad file descriptor 00:26:25.155 [2024-11-26 07:35:53.055264] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.155 [ 00:26:25.155 { 00:26:25.155 "name": "nvme0n1", 00:26:25.155 "aliases": [ 00:26:25.155 "a4074706-8ecc-49c7-81ff-79d146324dc2" 00:26:25.155 ], 00:26:25.155 "product_name": "NVMe disk", 00:26:25.155 "block_size": 512, 00:26:25.155 "num_blocks": 2097152, 00:26:25.155 "uuid": "a4074706-8ecc-49c7-81ff-79d146324dc2", 00:26:25.155 "numa_id": 0, 00:26:25.155 "assigned_rate_limits": { 00:26:25.155 "rw_ios_per_sec": 0, 00:26:25.155 "rw_mbytes_per_sec": 0, 00:26:25.155 "r_mbytes_per_sec": 0, 00:26:25.155 "w_mbytes_per_sec": 0 00:26:25.155 }, 00:26:25.155 "claimed": false, 00:26:25.155 "zoned": false, 00:26:25.155 "supported_io_types": { 00:26:25.155 "read": true, 00:26:25.155 "write": true, 00:26:25.155 "unmap": false, 00:26:25.155 "flush": true, 00:26:25.155 "reset": true, 00:26:25.155 "nvme_admin": true, 00:26:25.155 "nvme_io": true, 00:26:25.155 "nvme_io_md": false, 00:26:25.155 "write_zeroes": true, 00:26:25.155 "zcopy": false, 00:26:25.155 "get_zone_info": false, 00:26:25.155 "zone_management": false, 00:26:25.155 "zone_append": false, 00:26:25.155 "compare": true, 00:26:25.155 "compare_and_write": true, 00:26:25.155 "abort": true, 00:26:25.155 "seek_hole": false, 00:26:25.155 "seek_data": false, 00:26:25.155 "copy": true, 00:26:25.155 "nvme_iov_md": false 00:26:25.155 }, 00:26:25.155 "memory_domains": [ 00:26:25.155 { 00:26:25.155 "dma_device_id": "system", 00:26:25.155 "dma_device_type": 1 00:26:25.155 } 00:26:25.155 ], 00:26:25.155 "driver_specific": { 00:26:25.155 "nvme": [ 00:26:25.155 { 00:26:25.155 "trid": { 00:26:25.155 "trtype": "TCP", 00:26:25.155 "adrfam": "IPv4", 00:26:25.155 "traddr": "10.0.0.2", 00:26:25.155 "trsvcid": "4420", 00:26:25.155 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:25.155 }, 00:26:25.155 "ctrlr_data": { 00:26:25.155 "cntlid": 2, 00:26:25.155 "vendor_id": "0x8086", 00:26:25.155 "model_number": "SPDK bdev Controller", 00:26:25.155 "serial_number": "00000000000000000000", 00:26:25.155 "firmware_revision": "25.01", 00:26:25.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:25.155 "oacs": { 00:26:25.155 "security": 0, 00:26:25.155 "format": 0, 00:26:25.155 "firmware": 0, 00:26:25.155 "ns_manage": 0 00:26:25.155 }, 00:26:25.155 "multi_ctrlr": true, 00:26:25.155 "ana_reporting": false 00:26:25.155 }, 00:26:25.155 "vs": { 00:26:25.155 "nvme_version": "1.3" 00:26:25.155 }, 00:26:25.155 "ns_data": { 00:26:25.155 "id": 1, 00:26:25.155 "can_share": true 00:26:25.155 } 00:26:25.155 } 00:26:25.155 ], 00:26:25.155 "mp_policy": "active_passive" 00:26:25.155 } 00:26:25.155 } 00:26:25.155 ] 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
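The second bdev_get_bdevs dump differs from the first only in ctrlr_data.cntlid (1 before the reset, 2 after): the reset tears down the TCP association and reconnects under a new controller ID while the bdev handle nvme0n1 stays stable. A quick way to spot-check that one field (a sketch, assuming jq is available on the box):

scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'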
00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.rcGjdFWnbz 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.rcGjdFWnbz 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.rcGjdFWnbz 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.155 [2024-11-26 07:35:53.143947] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:25.155 [2024-11-26 07:35:53.144108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.155 [2024-11-26 07:35:53.168024] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:25.155 nvme0n1 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.155 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.155 [ 00:26:25.155 { 00:26:25.155 "name": "nvme0n1", 00:26:25.155 "aliases": [ 00:26:25.155 "a4074706-8ecc-49c7-81ff-79d146324dc2" 00:26:25.156 ], 00:26:25.156 "product_name": "NVMe disk", 00:26:25.156 "block_size": 512, 00:26:25.156 "num_blocks": 2097152, 00:26:25.156 "uuid": "a4074706-8ecc-49c7-81ff-79d146324dc2", 00:26:25.156 "numa_id": 0, 00:26:25.156 "assigned_rate_limits": { 00:26:25.156 "rw_ios_per_sec": 0, 00:26:25.417 "rw_mbytes_per_sec": 0, 00:26:25.417 "r_mbytes_per_sec": 0, 00:26:25.417 "w_mbytes_per_sec": 0 00:26:25.417 }, 00:26:25.417 "claimed": false, 00:26:25.417 "zoned": false, 00:26:25.417 "supported_io_types": { 00:26:25.417 "read": true, 00:26:25.417 "write": true, 00:26:25.417 "unmap": false, 00:26:25.417 "flush": true, 00:26:25.417 "reset": true, 00:26:25.417 "nvme_admin": true, 00:26:25.417 "nvme_io": true, 00:26:25.417 "nvme_io_md": false, 00:26:25.417 "write_zeroes": true, 00:26:25.417 "zcopy": false, 00:26:25.417 "get_zone_info": false, 00:26:25.417 "zone_management": false, 00:26:25.417 "zone_append": false, 00:26:25.417 "compare": true, 00:26:25.417 "compare_and_write": true, 00:26:25.417 "abort": true, 00:26:25.417 "seek_hole": false, 00:26:25.417 "seek_data": false, 00:26:25.417 "copy": true, 00:26:25.417 "nvme_iov_md": false 00:26:25.417 }, 00:26:25.417 "memory_domains": [ 00:26:25.417 { 00:26:25.417 "dma_device_id": "system", 00:26:25.417 "dma_device_type": 1 00:26:25.417 } 00:26:25.417 ], 00:26:25.417 "driver_specific": { 00:26:25.417 "nvme": [ 00:26:25.417 { 00:26:25.417 "trid": { 00:26:25.417 "trtype": "TCP", 00:26:25.417 "adrfam": "IPv4", 00:26:25.417 "traddr": "10.0.0.2", 00:26:25.417 "trsvcid": "4421", 00:26:25.417 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:25.417 }, 00:26:25.417 "ctrlr_data": { 00:26:25.417 "cntlid": 3, 00:26:25.417 "vendor_id": "0x8086", 00:26:25.417 "model_number": "SPDK bdev Controller", 00:26:25.417 "serial_number": "00000000000000000000", 00:26:25.417 "firmware_revision": "25.01", 00:26:25.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:25.417 "oacs": { 00:26:25.417 "security": 0, 00:26:25.417 "format": 0, 00:26:25.417 "firmware": 0, 00:26:25.417 "ns_manage": 0 00:26:25.417 }, 00:26:25.417 "multi_ctrlr": true, 00:26:25.417 "ana_reporting": false 00:26:25.417 }, 00:26:25.417 "vs": { 00:26:25.417 "nvme_version": "1.3" 00:26:25.417 }, 00:26:25.417 "ns_data": { 00:26:25.417 "id": 1, 00:26:25.417 "can_share": true 00:26:25.417 } 00:26:25.417 } 00:26:25.417 ], 00:26:25.417 "mp_policy": "active_passive" 00:26:25.417 } 00:26:25.417 } 00:26:25.417 ] 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.rcGjdFWnbz 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
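The TLS leg above follows the same attach pattern with a pre-shared key: the PSK interchange string is written to a 0600 file, registered in the keyring, and then referenced by name both on the subsystem-host grant and on the host-side attach. Condensed from the trace (a sketch; the interchange key is the sample key from this run, and the mktemp path will differ per invocation):

key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"
scripts/rpc.py keyring_file_add_key key0 "$key_path"
# require explicit host grants, then open a TLS-only listener on 4421
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
# the attach must present the same host NQN and key name; cntlid 3 in the dump below confirms the TLS connect
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0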
00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:25.417 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:25.418 rmmod nvme_tcp 00:26:25.418 rmmod nvme_fabrics 00:26:25.418 rmmod nvme_keyring 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1549552 ']' 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1549552 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1549552 ']' 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1549552 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1549552 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1549552' 00:26:25.418 killing process with pid 1549552 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1549552 00:26:25.418 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1549552 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
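Teardown is symmetric with setup: the kernel initiator modules are unloaded (the rmmod lines above show nvme_tcp, nvme_fabrics, and nvme_keyring going away) and the target process, pid 1549552 in this run, is killed and reaped. The essential steps (a sketch; $nvmfpid stands for the pid recorded when nvmf_tgt was launched):

sync
modprobe -v -r nvme-tcp       # pulls out nvme_tcp plus its nvme_fabrics/nvme_keyring dependents
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"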
00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:25.679 07:35:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.594 07:35:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:27.594 00:26:27.594 real 0m11.790s 00:26:27.594 user 0m4.256s 00:26:27.594 sys 0m6.125s 00:26:27.594 07:35:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.594 07:35:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:27.594 ************************************ 00:26:27.594 END TEST nvmf_async_init 00:26:27.594 ************************************ 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.855 ************************************ 00:26:27.855 START TEST dma 00:26:27.855 ************************************ 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:27.855 * Looking for test storage... 00:26:27.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:27.855 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:28.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.117 --rc genhtml_branch_coverage=1 00:26:28.117 --rc genhtml_function_coverage=1 00:26:28.117 --rc genhtml_legend=1 00:26:28.117 --rc geninfo_all_blocks=1 00:26:28.117 --rc geninfo_unexecuted_blocks=1 00:26:28.117 00:26:28.117 ' 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:28.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.117 --rc genhtml_branch_coverage=1 00:26:28.117 --rc genhtml_function_coverage=1 00:26:28.117 --rc genhtml_legend=1 00:26:28.117 --rc geninfo_all_blocks=1 00:26:28.117 --rc geninfo_unexecuted_blocks=1 00:26:28.117 00:26:28.117 ' 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:28.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.117 --rc genhtml_branch_coverage=1 00:26:28.117 --rc genhtml_function_coverage=1 00:26:28.117 --rc genhtml_legend=1 00:26:28.117 --rc geninfo_all_blocks=1 00:26:28.117 --rc geninfo_unexecuted_blocks=1 00:26:28.117 00:26:28.117 ' 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:28.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.117 --rc genhtml_branch_coverage=1 00:26:28.117 --rc genhtml_function_coverage=1 00:26:28.117 --rc genhtml_legend=1 00:26:28.117 --rc geninfo_all_blocks=1 00:26:28.117 --rc geninfo_unexecuted_blocks=1 00:26:28.117 00:26:28.117 ' 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.117 
07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.117 07:35:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:28.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:28.118 00:26:28.118 real 0m0.240s 00:26:28.118 user 0m0.133s 00:26:28.118 sys 0m0.122s 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.118 07:35:55 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:28.118 ************************************ 00:26:28.118 END TEST dma 00:26:28.118 ************************************ 00:26:28.118 07:35:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:28.118 07:35:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:28.118 07:35:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.118 07:35:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.118 ************************************ 00:26:28.118 START TEST nvmf_identify 00:26:28.118 
************************************ 00:26:28.118 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:28.118 * Looking for test storage... 00:26:28.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:28.118 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:28.118 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:26:28.118 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:28.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.380 --rc genhtml_branch_coverage=1 00:26:28.380 --rc genhtml_function_coverage=1 00:26:28.380 --rc genhtml_legend=1 00:26:28.380 --rc geninfo_all_blocks=1 00:26:28.380 --rc geninfo_unexecuted_blocks=1 00:26:28.380 00:26:28.380 ' 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:28.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.380 --rc genhtml_branch_coverage=1 00:26:28.380 --rc genhtml_function_coverage=1 00:26:28.380 --rc genhtml_legend=1 00:26:28.380 --rc geninfo_all_blocks=1 00:26:28.380 --rc geninfo_unexecuted_blocks=1 00:26:28.380 00:26:28.380 ' 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:28.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.380 --rc genhtml_branch_coverage=1 00:26:28.380 --rc genhtml_function_coverage=1 00:26:28.380 --rc genhtml_legend=1 00:26:28.380 --rc geninfo_all_blocks=1 00:26:28.380 --rc geninfo_unexecuted_blocks=1 00:26:28.380 00:26:28.380 ' 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:28.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.380 --rc genhtml_branch_coverage=1 00:26:28.380 --rc genhtml_function_coverage=1 00:26:28.380 --rc genhtml_legend=1 00:26:28.380 --rc geninfo_all_blocks=1 00:26:28.380 --rc geninfo_unexecuted_blocks=1 00:26:28.380 00:26:28.380 ' 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.380 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:28.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:28.381 07:35:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.526 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:36.527 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:36.527 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:36.527 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:36.527 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:26:36.527 00:26:36.527 --- 10.0.0.2 ping statistics --- 00:26:36.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.527 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms
00:26:36.527 
00:26:36.527 --- 10.0.0.1 ping statistics ---
00:26:36.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:36.527 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1554279
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1554279
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1554279 ']'
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:36.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:36.527 07:36:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:26:36.527 [2024-11-26 07:36:03.978139] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
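The xtrace above captures the whole per-test network bring-up: nvmf/common.sh moves one port of the NIC pair (cvl_0_0) into a private network namespace, addresses both ends, opens TCP port 4420 through iptables, proves reachability with a ping in each direction, and only then launches nvmf_tgt inside that namespace. Condensed into a standalone sketch (interface names, addresses, and the nvmf_tgt path are the ones from this run; substitute your own):

# Re-creation of the setup traced above; run as root on a host where the
# cvl_0_0/cvl_0_1 port pair exists (here the two ports of the e810 NIC).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                          # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> initiator
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Keeping the target in its own namespace is what lets a single machine act as both initiator and target over real NIC ports instead of having the kernel short-circuit the traffic over loopback.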
00:26:36.527 [2024-11-26 07:36:03.978216] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.527 [2024-11-26 07:36:04.080157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:36.527 [2024-11-26 07:36:04.135203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.528 [2024-11-26 07:36:04.135254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.528 [2024-11-26 07:36:04.135264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.528 [2024-11-26 07:36:04.135272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.528 [2024-11-26 07:36:04.135278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:36.528 [2024-11-26 07:36:04.137277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.528 [2024-11-26 07:36:04.137438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.528 [2024-11-26 07:36:04.137580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.528 [2024-11-26 07:36:04.137580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:36.789 [2024-11-26 07:36:04.814186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.789 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:37.052 Malloc0 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:37.052 [2024-11-26 07:36:04.936220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:37.052 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.053 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:37.053 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.053 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:37.053 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.053 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:37.053 [ 00:26:37.053 { 00:26:37.053 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:37.053 "subtype": "Discovery", 00:26:37.053 "listen_addresses": [ 00:26:37.053 { 00:26:37.053 "trtype": "TCP", 00:26:37.053 "adrfam": "IPv4", 00:26:37.053 "traddr": "10.0.0.2", 00:26:37.053 "trsvcid": "4420" 00:26:37.053 } 00:26:37.053 ], 00:26:37.053 "allow_any_host": true, 00:26:37.053 "hosts": [] 00:26:37.053 }, 00:26:37.053 { 00:26:37.053 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.053 "subtype": "NVMe", 00:26:37.053 "listen_addresses": [ 00:26:37.053 { 00:26:37.053 "trtype": "TCP", 00:26:37.053 "adrfam": "IPv4", 00:26:37.053 "traddr": "10.0.0.2", 00:26:37.053 "trsvcid": "4420" 00:26:37.053 } 00:26:37.053 ], 00:26:37.053 "allow_any_host": true, 00:26:37.053 "hosts": [], 00:26:37.053 "serial_number": "SPDK00000000000001", 00:26:37.053 "model_number": "SPDK bdev Controller", 00:26:37.053 "max_namespaces": 32, 00:26:37.053 "min_cntlid": 1, 00:26:37.053 "max_cntlid": 65519, 00:26:37.053 "namespaces": [ 00:26:37.053 { 00:26:37.053 "nsid": 1, 00:26:37.053 "bdev_name": "Malloc0", 00:26:37.053 "name": "Malloc0", 00:26:37.053 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:37.053 "eui64": "ABCDEF0123456789", 00:26:37.053 "uuid": "2444f887-57b0-49c1-b387-37b127af9b97" 00:26:37.053 } 00:26:37.053 ] 00:26:37.053 } 00:26:37.053 ] 00:26:37.053 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.053 07:36:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:37.053 [2024-11-26 07:36:05.002053] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:26:37.053 [2024-11-26 07:36:05.002121] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1554629 ] 00:26:37.053 [2024-11-26 07:36:05.058884] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:26:37.053 [2024-11-26 07:36:05.058960] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:37.053 [2024-11-26 07:36:05.058966] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:37.053 [2024-11-26 07:36:05.058984] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:37.053 [2024-11-26 07:36:05.058999] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:37.053 [2024-11-26 07:36:05.062586] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:26:37.053 [2024-11-26 07:36:05.062639] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf75690 0 00:26:37.053 [2024-11-26 07:36:05.070180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:37.053 [2024-11-26 07:36:05.070197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:37.053 [2024-11-26 07:36:05.070204] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:37.053 [2024-11-26 07:36:05.070207] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:37.053 [2024-11-26 07:36:05.070254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.070261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.070265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf75690) 00:26:37.053 [2024-11-26 07:36:05.070284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:37.053 [2024-11-26 07:36:05.070309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7100, cid 0, qid 0 00:26:37.053 [2024-11-26 07:36:05.078175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.053 [2024-11-26 07:36:05.078186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.053 [2024-11-26 07:36:05.078190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.078196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7100) on tqpair=0xf75690 00:26:37.053 [2024-11-26 07:36:05.078209] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:37.053 [2024-11-26 07:36:05.078218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:26:37.053 [2024-11-26 07:36:05.078224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:26:37.053 [2024-11-26 07:36:05.078241] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.078245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.078249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf75690) 00:26:37.053 [2024-11-26 07:36:05.078258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.053 [2024-11-26 07:36:05.078279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7100, cid 0, qid 0 00:26:37.053 [2024-11-26 07:36:05.078364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.053 [2024-11-26 07:36:05.078370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.053 [2024-11-26 07:36:05.078374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.078378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7100) on tqpair=0xf75690 00:26:37.053 [2024-11-26 07:36:05.078385] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:26:37.053 [2024-11-26 07:36:05.078393] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:26:37.053 [2024-11-26 07:36:05.078400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.078404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.078408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf75690) 00:26:37.053 [2024-11-26 07:36:05.078415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.053 [2024-11-26 07:36:05.078426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7100, cid 0, qid 0 00:26:37.053 [2024-11-26 07:36:05.078502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.053 [2024-11-26 07:36:05.078509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.053 [2024-11-26 07:36:05.078512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.078516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7100) on tqpair=0xf75690 00:26:37.053 [2024-11-26 07:36:05.078522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:26:37.053 [2024-11-26 07:36:05.078531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:37.053 [2024-11-26 07:36:05.078537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.078541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.053 [2024-11-26 07:36:05.078545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf75690) 00:26:37.053 [2024-11-26 07:36:05.078552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.053 [2024-11-26 07:36:05.078562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7100, cid 0, qid 0 
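The traces from the FABRIC CONNECT above through the PROPERTY GET/SET exchanges that continue below are the standard NVMe-oF controller bring-up: CONNECT on the admin queue returns CNTLID 0x0001, then the host reads the VS and CAP registers, checks CC.EN, waits for CSTS.RDY = 0 while the controller is disabled, writes CC.EN = 1, and polls until CSTS.RDY = 1 before moving on to IDENTIFY. On a fabric each of those register accesses travels as a Property Get/Set capsule rather than a PCIe MMIO access, which is why every step surfaces here as a FABRIC PROPERTY GET/SET command. The debug detail comes from the -L all flag; the run can be reproduced against a live target with the same invocation used earlier in this test:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all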
00:26:37.053 [2024-11-26 07:36:05.078636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.054 [2024-11-26 07:36:05.078643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.054 [2024-11-26 07:36:05.078646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.078650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7100) on tqpair=0xf75690 00:26:37.054 [2024-11-26 07:36:05.078656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:37.054 [2024-11-26 07:36:05.078666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.078670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.078674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf75690) 00:26:37.054 [2024-11-26 07:36:05.078681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.054 [2024-11-26 07:36:05.078692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7100, cid 0, qid 0 00:26:37.054 [2024-11-26 07:36:05.078773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.054 [2024-11-26 07:36:05.078782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.054 [2024-11-26 07:36:05.078785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.078789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7100) on tqpair=0xf75690 00:26:37.054 [2024-11-26 07:36:05.078795] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:37.054 [2024-11-26 07:36:05.078800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:37.054 [2024-11-26 07:36:05.078808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:37.054 [2024-11-26 07:36:05.078918] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:26:37.054 [2024-11-26 07:36:05.078923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:37.054 [2024-11-26 07:36:05.078933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.078937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.078941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf75690) 00:26:37.054 [2024-11-26 07:36:05.078948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.054 [2024-11-26 07:36:05.078959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7100, cid 0, qid 0 00:26:37.054 [2024-11-26 07:36:05.079031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.054 [2024-11-26 07:36:05.079037] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.054 [2024-11-26 07:36:05.079041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.079044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7100) on tqpair=0xf75690 00:26:37.054 [2024-11-26 07:36:05.079049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:37.054 [2024-11-26 07:36:05.079060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.079064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.079068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf75690) 00:26:37.054 [2024-11-26 07:36:05.079074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.054 [2024-11-26 07:36:05.079085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7100, cid 0, qid 0 00:26:37.054 [2024-11-26 07:36:05.079147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.054 [2024-11-26 07:36:05.079153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.054 [2024-11-26 07:36:05.079156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.079170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7100) on tqpair=0xf75690 00:26:37.054 [2024-11-26 07:36:05.079175] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:37.054 [2024-11-26 07:36:05.079181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:37.054 [2024-11-26 07:36:05.079189] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:26:37.054 [2024-11-26 07:36:05.079201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:37.054 [2024-11-26 07:36:05.079214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.079218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf75690) 00:26:37.054 [2024-11-26 07:36:05.079225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.054 [2024-11-26 07:36:05.079237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7100, cid 0, qid 0 00:26:37.054 [2024-11-26 07:36:05.079353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.054 [2024-11-26 07:36:05.079360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.054 [2024-11-26 07:36:05.079364] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.079368] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf75690): datao=0, datal=4096, cccid=0 00:26:37.054 [2024-11-26 07:36:05.079373] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xfd7100) on tqpair(0xf75690): expected_datao=0, payload_size=4096 00:26:37.054 [2024-11-26 07:36:05.079378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.079394] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.079399] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.120218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.054 [2024-11-26 07:36:05.120231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.054 [2024-11-26 07:36:05.120235] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.120239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7100) on tqpair=0xf75690 00:26:37.054 [2024-11-26 07:36:05.120250] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:26:37.054 [2024-11-26 07:36:05.120255] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:26:37.054 [2024-11-26 07:36:05.120260] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:26:37.054 [2024-11-26 07:36:05.120271] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:26:37.054 [2024-11-26 07:36:05.120276] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:26:37.054 [2024-11-26 07:36:05.120281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:26:37.054 [2024-11-26 07:36:05.120293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:37.054 [2024-11-26 07:36:05.120302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.120307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.120311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf75690) 00:26:37.054 [2024-11-26 07:36:05.120320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:37.054 [2024-11-26 07:36:05.120333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7100, cid 0, qid 0 00:26:37.054 [2024-11-26 07:36:05.120408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.054 [2024-11-26 07:36:05.120414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.054 [2024-11-26 07:36:05.120418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.120422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7100) on tqpair=0xf75690 00:26:37.054 [2024-11-26 07:36:05.120431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.120435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.120442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf75690) 00:26:37.054 [2024-11-26 
07:36:05.120449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.054 [2024-11-26 07:36:05.120455] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.120459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.054 [2024-11-26 07:36:05.120462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf75690) 00:26:37.055 [2024-11-26 07:36:05.120468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.055 [2024-11-26 07:36:05.120475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf75690) 00:26:37.055 [2024-11-26 07:36:05.120488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.055 [2024-11-26 07:36:05.120494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf75690) 00:26:37.055 [2024-11-26 07:36:05.120507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.055 [2024-11-26 07:36:05.120512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:37.055 [2024-11-26 07:36:05.120521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:37.055 [2024-11-26 07:36:05.120527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf75690) 00:26:37.055 [2024-11-26 07:36:05.120538] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.055 [2024-11-26 07:36:05.120551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7100, cid 0, qid 0 00:26:37.055 [2024-11-26 07:36:05.120556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7280, cid 1, qid 0 00:26:37.055 [2024-11-26 07:36:05.120561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7400, cid 2, qid 0 00:26:37.055 [2024-11-26 07:36:05.120566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7580, cid 3, qid 0 00:26:37.055 [2024-11-26 07:36:05.120571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7700, cid 4, qid 0 00:26:37.055 [2024-11-26 07:36:05.120701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.055 [2024-11-26 07:36:05.120708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.055 [2024-11-26 07:36:05.120711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.055 
[2024-11-26 07:36:05.120715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7700) on tqpair=0xf75690 00:26:37.055 [2024-11-26 07:36:05.120725] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:26:37.055 [2024-11-26 07:36:05.120730] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:26:37.055 [2024-11-26 07:36:05.120742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf75690) 00:26:37.055 [2024-11-26 07:36:05.120753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.055 [2024-11-26 07:36:05.120766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7700, cid 4, qid 0 00:26:37.055 [2024-11-26 07:36:05.120847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.055 [2024-11-26 07:36:05.120853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.055 [2024-11-26 07:36:05.120857] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120861] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf75690): datao=0, datal=4096, cccid=4 00:26:37.055 [2024-11-26 07:36:05.120865] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfd7700) on tqpair(0xf75690): expected_datao=0, payload_size=4096 00:26:37.055 [2024-11-26 07:36:05.120870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120890] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120894] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.055 [2024-11-26 07:36:05.120937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.055 [2024-11-26 07:36:05.120941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120945] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7700) on tqpair=0xf75690 00:26:37.055 [2024-11-26 07:36:05.120960] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:26:37.055 [2024-11-26 07:36:05.120992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.120996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf75690) 00:26:37.055 [2024-11-26 07:36:05.121003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.055 [2024-11-26 07:36:05.121010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.121014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.121018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf75690) 00:26:37.055 [2024-11-26 07:36:05.121024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.055 [2024-11-26 07:36:05.121039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7700, cid 4, qid 0 00:26:37.055 [2024-11-26 07:36:05.121044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7880, cid 5, qid 0 00:26:37.055 [2024-11-26 07:36:05.121156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.055 [2024-11-26 07:36:05.121170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.055 [2024-11-26 07:36:05.121174] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.121177] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf75690): datao=0, datal=1024, cccid=4 00:26:37.055 [2024-11-26 07:36:05.121182] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfd7700) on tqpair(0xf75690): expected_datao=0, payload_size=1024 00:26:37.055 [2024-11-26 07:36:05.121186] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.121193] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.121197] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.121203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.055 [2024-11-26 07:36:05.121209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.055 [2024-11-26 07:36:05.121212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.055 [2024-11-26 07:36:05.121216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7880) on tqpair=0xf75690 00:26:37.319 [2024-11-26 07:36:05.165171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.319 [2024-11-26 07:36:05.165189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.319 [2024-11-26 07:36:05.165193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.165197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7700) on tqpair=0xf75690 00:26:37.319 [2024-11-26 07:36:05.165211] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.165215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf75690) 00:26:37.319 [2024-11-26 07:36:05.165223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-26 07:36:05.165241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7700, cid 4, qid 0 00:26:37.319 [2024-11-26 07:36:05.165376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.319 [2024-11-26 07:36:05.165383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.319 [2024-11-26 07:36:05.165387] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.165390] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf75690): datao=0, datal=3072, cccid=4 00:26:37.319 [2024-11-26 07:36:05.165395] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfd7700) on tqpair(0xf75690): expected_datao=0, payload_size=3072 00:26:37.319 [2024-11-26 07:36:05.165399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
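The GET LOG PAGE (02) commands around this point fetch log page 0x70, the discovery log: a 1024-byte read for the header, a 3072-byte read for the full log, and finally an 8-byte re-read of the generation counter to confirm the log did not change mid-fetch (the 1024/3072/8 payload sizes are visible in the c2h_data traces). The same information can be pulled from the initiator side with nvme-cli, assuming that package is installed (the nvme-tcp kernel module was already modprobed earlier in this test):

nvme discover -t tcp -a 10.0.0.2 -s 4420
# should list the same two records as the report below: the discovery
# subsystem itself plus nqn.2016-06.io.spdk:cnode1, both on 10.0.0.2:4420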
00:26:37.319 [2024-11-26 07:36:05.165435] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.165439] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.206239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.319 [2024-11-26 07:36:05.206249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.319 [2024-11-26 07:36:05.206253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.206257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7700) on tqpair=0xf75690 00:26:37.319 [2024-11-26 07:36:05.206267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.206271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf75690) 00:26:37.319 [2024-11-26 07:36:05.206278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.319 [2024-11-26 07:36:05.206294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7700, cid 4, qid 0 00:26:37.319 [2024-11-26 07:36:05.206384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.319 [2024-11-26 07:36:05.206391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.319 [2024-11-26 07:36:05.206394] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.206398] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf75690): datao=0, datal=8, cccid=4 00:26:37.319 [2024-11-26 07:36:05.206402] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfd7700) on tqpair(0xf75690): expected_datao=0, payload_size=8 00:26:37.319 [2024-11-26 07:36:05.206407] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.206413] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.206417] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.247217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.319 [2024-11-26 07:36:05.247227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.319 [2024-11-26 07:36:05.247231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.319 [2024-11-26 07:36:05.247235] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7700) on tqpair=0xf75690 00:26:37.319 ===================================================== 00:26:37.319 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:37.319 ===================================================== 00:26:37.319 Controller Capabilities/Features 00:26:37.319 ================================ 00:26:37.319 Vendor ID: 0000 00:26:37.319 Subsystem Vendor ID: 0000 00:26:37.319 Serial Number: .................... 00:26:37.319 Model Number: ........................................ 
00:26:37.319 Firmware Version: 25.01 00:26:37.319 Recommended Arb Burst: 0 00:26:37.319 IEEE OUI Identifier: 00 00 00 00:26:37.320 Multi-path I/O 00:26:37.320 May have multiple subsystem ports: No 00:26:37.320 May have multiple controllers: No 00:26:37.320 Associated with SR-IOV VF: No 00:26:37.320 Max Data Transfer Size: 131072 00:26:37.320 Max Number of Namespaces: 0 00:26:37.320 Max Number of I/O Queues: 1024 00:26:37.320 NVMe Specification Version (VS): 1.3 00:26:37.320 NVMe Specification Version (Identify): 1.3 00:26:37.320 Maximum Queue Entries: 128 00:26:37.320 Contiguous Queues Required: Yes 00:26:37.320 Arbitration Mechanisms Supported 00:26:37.320 Weighted Round Robin: Not Supported 00:26:37.320 Vendor Specific: Not Supported 00:26:37.320 Reset Timeout: 15000 ms 00:26:37.320 Doorbell Stride: 4 bytes 00:26:37.320 NVM Subsystem Reset: Not Supported 00:26:37.320 Command Sets Supported 00:26:37.320 NVM Command Set: Supported 00:26:37.320 Boot Partition: Not Supported 00:26:37.320 Memory Page Size Minimum: 4096 bytes 00:26:37.320 Memory Page Size Maximum: 4096 bytes 00:26:37.320 Persistent Memory Region: Not Supported 00:26:37.320 Optional Asynchronous Events Supported 00:26:37.320 Namespace Attribute Notices: Not Supported 00:26:37.320 Firmware Activation Notices: Not Supported 00:26:37.320 ANA Change Notices: Not Supported 00:26:37.320 PLE Aggregate Log Change Notices: Not Supported 00:26:37.320 LBA Status Info Alert Notices: Not Supported 00:26:37.320 EGE Aggregate Log Change Notices: Not Supported 00:26:37.320 Normal NVM Subsystem Shutdown event: Not Supported 00:26:37.320 Zone Descriptor Change Notices: Not Supported 00:26:37.320 Discovery Log Change Notices: Supported 00:26:37.320 Controller Attributes 00:26:37.320 128-bit Host Identifier: Not Supported 00:26:37.320 Non-Operational Permissive Mode: Not Supported 00:26:37.320 NVM Sets: Not Supported 00:26:37.320 Read Recovery Levels: Not Supported 00:26:37.320 Endurance Groups: Not Supported 00:26:37.320 Predictable Latency Mode: Not Supported 00:26:37.320 Traffic Based Keep ALive: Not Supported 00:26:37.320 Namespace Granularity: Not Supported 00:26:37.320 SQ Associations: Not Supported 00:26:37.320 UUID List: Not Supported 00:26:37.320 Multi-Domain Subsystem: Not Supported 00:26:37.320 Fixed Capacity Management: Not Supported 00:26:37.320 Variable Capacity Management: Not Supported 00:26:37.320 Delete Endurance Group: Not Supported 00:26:37.320 Delete NVM Set: Not Supported 00:26:37.320 Extended LBA Formats Supported: Not Supported 00:26:37.320 Flexible Data Placement Supported: Not Supported 00:26:37.320 00:26:37.320 Controller Memory Buffer Support 00:26:37.320 ================================ 00:26:37.320 Supported: No 00:26:37.320 00:26:37.320 Persistent Memory Region Support 00:26:37.320 ================================ 00:26:37.320 Supported: No 00:26:37.320 00:26:37.320 Admin Command Set Attributes 00:26:37.320 ============================ 00:26:37.320 Security Send/Receive: Not Supported 00:26:37.320 Format NVM: Not Supported 00:26:37.320 Firmware Activate/Download: Not Supported 00:26:37.320 Namespace Management: Not Supported 00:26:37.320 Device Self-Test: Not Supported 00:26:37.320 Directives: Not Supported 00:26:37.320 NVMe-MI: Not Supported 00:26:37.320 Virtualization Management: Not Supported 00:26:37.320 Doorbell Buffer Config: Not Supported 00:26:37.320 Get LBA Status Capability: Not Supported 00:26:37.320 Command & Feature Lockdown Capability: Not Supported 00:26:37.320 Abort Command Limit: 1 00:26:37.320 Async 
Event Request Limit: 4 00:26:37.320 Number of Firmware Slots: N/A 00:26:37.320 Firmware Slot 1 Read-Only: N/A 00:26:37.320 Firmware Activation Without Reset: N/A 00:26:37.320 Multiple Update Detection Support: N/A 00:26:37.320 Firmware Update Granularity: No Information Provided 00:26:37.320 Per-Namespace SMART Log: No 00:26:37.320 Asymmetric Namespace Access Log Page: Not Supported 00:26:37.320 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:37.320 Command Effects Log Page: Not Supported 00:26:37.320 Get Log Page Extended Data: Supported 00:26:37.320 Telemetry Log Pages: Not Supported 00:26:37.320 Persistent Event Log Pages: Not Supported 00:26:37.320 Supported Log Pages Log Page: May Support 00:26:37.320 Commands Supported & Effects Log Page: Not Supported 00:26:37.320 Feature Identifiers & Effects Log Page:May Support 00:26:37.320 NVMe-MI Commands & Effects Log Page: May Support 00:26:37.320 Data Area 4 for Telemetry Log: Not Supported 00:26:37.320 Error Log Page Entries Supported: 128 00:26:37.320 Keep Alive: Not Supported 00:26:37.320 00:26:37.320 NVM Command Set Attributes 00:26:37.320 ========================== 00:26:37.320 Submission Queue Entry Size 00:26:37.320 Max: 1 00:26:37.320 Min: 1 00:26:37.320 Completion Queue Entry Size 00:26:37.320 Max: 1 00:26:37.320 Min: 1 00:26:37.320 Number of Namespaces: 0 00:26:37.320 Compare Command: Not Supported 00:26:37.320 Write Uncorrectable Command: Not Supported 00:26:37.320 Dataset Management Command: Not Supported 00:26:37.320 Write Zeroes Command: Not Supported 00:26:37.320 Set Features Save Field: Not Supported 00:26:37.320 Reservations: Not Supported 00:26:37.320 Timestamp: Not Supported 00:26:37.320 Copy: Not Supported 00:26:37.320 Volatile Write Cache: Not Present 00:26:37.320 Atomic Write Unit (Normal): 1 00:26:37.320 Atomic Write Unit (PFail): 1 00:26:37.320 Atomic Compare & Write Unit: 1 00:26:37.320 Fused Compare & Write: Supported 00:26:37.320 Scatter-Gather List 00:26:37.320 SGL Command Set: Supported 00:26:37.320 SGL Keyed: Supported 00:26:37.320 SGL Bit Bucket Descriptor: Not Supported 00:26:37.320 SGL Metadata Pointer: Not Supported 00:26:37.320 Oversized SGL: Not Supported 00:26:37.320 SGL Metadata Address: Not Supported 00:26:37.320 SGL Offset: Supported 00:26:37.320 Transport SGL Data Block: Not Supported 00:26:37.320 Replay Protected Memory Block: Not Supported 00:26:37.320 00:26:37.320 Firmware Slot Information 00:26:37.320 ========================= 00:26:37.320 Active slot: 0 00:26:37.320 00:26:37.320 00:26:37.320 Error Log 00:26:37.320 ========= 00:26:37.320 00:26:37.320 Active Namespaces 00:26:37.320 ================= 00:26:37.320 Discovery Log Page 00:26:37.320 ================== 00:26:37.320 Generation Counter: 2 00:26:37.320 Number of Records: 2 00:26:37.320 Record Format: 0 00:26:37.320 00:26:37.320 Discovery Log Entry 0 00:26:37.320 ---------------------- 00:26:37.320 Transport Type: 3 (TCP) 00:26:37.320 Address Family: 1 (IPv4) 00:26:37.320 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:37.320 Entry Flags: 00:26:37.320 Duplicate Returned Information: 1 00:26:37.320 Explicit Persistent Connection Support for Discovery: 1 00:26:37.320 Transport Requirements: 00:26:37.320 Secure Channel: Not Required 00:26:37.320 Port ID: 0 (0x0000) 00:26:37.320 Controller ID: 65535 (0xffff) 00:26:37.320 Admin Max SQ Size: 128 00:26:37.320 Transport Service Identifier: 4420 00:26:37.320 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:37.320 Transport Address: 10.0.0.2 00:26:37.320 
Discovery Log Entry 1 00:26:37.320 ---------------------- 00:26:37.320 Transport Type: 3 (TCP) 00:26:37.320 Address Family: 1 (IPv4) 00:26:37.320 Subsystem Type: 2 (NVM Subsystem) 00:26:37.320 Entry Flags: 00:26:37.320 Duplicate Returned Information: 0 00:26:37.320 Explicit Persistent Connection Support for Discovery: 0 00:26:37.320 Transport Requirements: 00:26:37.320 Secure Channel: Not Required 00:26:37.320 Port ID: 0 (0x0000) 00:26:37.320 Controller ID: 65535 (0xffff) 00:26:37.320 Admin Max SQ Size: 128 00:26:37.320 Transport Service Identifier: 4420 00:26:37.320 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:37.320 Transport Address: 10.0.0.2 [2024-11-26 07:36:05.247346] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:26:37.320 [2024-11-26 07:36:05.247360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7100) on tqpair=0xf75690 00:26:37.320 [2024-11-26 07:36:05.247367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-26 07:36:05.247373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7280) on tqpair=0xf75690 00:26:37.320 [2024-11-26 07:36:05.247378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-26 07:36:05.247383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7400) on tqpair=0xf75690 00:26:37.320 [2024-11-26 07:36:05.247388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.320 [2024-11-26 07:36:05.247393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7580) on tqpair=0xf75690 00:26:37.321 [2024-11-26 07:36:05.247397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.321 [2024-11-26 07:36:05.247410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf75690) 00:26:37.321 [2024-11-26 07:36:05.247426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-26 07:36:05.247442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7580, cid 3, qid 0 00:26:37.321 [2024-11-26 07:36:05.247508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.321 [2024-11-26 07:36:05.247515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.321 [2024-11-26 07:36:05.247518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7580) on tqpair=0xf75690 00:26:37.321 [2024-11-26 07:36:05.247530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf75690) 00:26:37.321 [2024-11-26 07:36:05.247545] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-26 07:36:05.247558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7580, cid 3, qid 0 00:26:37.321 [2024-11-26 07:36:05.247658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.321 [2024-11-26 07:36:05.247664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.321 [2024-11-26 07:36:05.247668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7580) on tqpair=0xf75690 00:26:37.321 [2024-11-26 07:36:05.247677] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:26:37.321 [2024-11-26 07:36:05.247682] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:26:37.321 [2024-11-26 07:36:05.247694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf75690) 00:26:37.321 [2024-11-26 07:36:05.247709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-26 07:36:05.247719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7580, cid 3, qid 0 00:26:37.321 [2024-11-26 07:36:05.247809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.321 [2024-11-26 07:36:05.247818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.321 [2024-11-26 07:36:05.247821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7580) on tqpair=0xf75690 00:26:37.321 [2024-11-26 07:36:05.247836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf75690) 00:26:37.321 [2024-11-26 07:36:05.247850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-26 07:36:05.247861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7580, cid 3, qid 0 00:26:37.321 [2024-11-26 07:36:05.247944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.321 [2024-11-26 07:36:05.247951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.321 [2024-11-26 07:36:05.247954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7580) on tqpair=0xf75690 00:26:37.321 [2024-11-26 07:36:05.247968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.247976] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf75690) 00:26:37.321 [2024-11-26 07:36:05.247982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-26 07:36:05.247993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7580, cid 3, qid 0 00:26:37.321 [2024-11-26 07:36:05.248061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.321 [2024-11-26 07:36:05.248067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.321 [2024-11-26 07:36:05.248071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.248075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7580) on tqpair=0xf75690 00:26:37.321 [2024-11-26 07:36:05.248084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.248088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.321 [2024-11-26 07:36:05.248092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf75690) 00:26:37.321 [2024-11-26 07:36:05.248099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.321 [2024-11-26 07:36:05.248110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfd7580, cid 3, qid 0 00:26:37.321
[... eight further identical FABRIC PROPERTY GET poll cycles (07:36:05.248212 through 07:36:05.253222) elided ...]
[2024-11-26 07:36:05.253302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.322 [2024-11-26 07:36:05.253309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.322 [2024-11-26 07:36:05.253313]
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.253316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfd7580) on tqpair=0xf75690 00:26:37.322 [2024-11-26 07:36:05.253325] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:26:37.322 00:26:37.322 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:37.322 [2024-11-26 07:36:05.301663] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:26:37.322 [2024-11-26 07:36:05.301705] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1554632 ] 00:26:37.322 [2024-11-26 07:36:05.356726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:26:37.322 [2024-11-26 07:36:05.356792] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:37.322 [2024-11-26 07:36:05.356797] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:37.322 [2024-11-26 07:36:05.356824] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:37.322 [2024-11-26 07:36:05.356836] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:37.322 [2024-11-26 07:36:05.360476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:26:37.322 [2024-11-26 07:36:05.360517] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1487690 0 00:26:37.322 [2024-11-26 07:36:05.368178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:37.322 [2024-11-26 07:36:05.368195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:37.322 [2024-11-26 07:36:05.368200] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:37.322 [2024-11-26 07:36:05.368204] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:37.322 [2024-11-26 07:36:05.368240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.368247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.368251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1487690) 00:26:37.322 [2024-11-26 07:36:05.368264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:37.322 [2024-11-26 07:36:05.368287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9100, cid 0, qid 0 00:26:37.322 [2024-11-26 07:36:05.375171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.322 [2024-11-26 07:36:05.375182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.322 [2024-11-26 07:36:05.375186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375191] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x14e9100) on tqpair=0x1487690 00:26:37.322 [2024-11-26 07:36:05.375201] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:37.322 [2024-11-26 07:36:05.375209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:26:37.322 [2024-11-26 07:36:05.375215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:26:37.322 [2024-11-26 07:36:05.375229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1487690) 00:26:37.322 [2024-11-26 07:36:05.375246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.322 [2024-11-26 07:36:05.375263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9100, cid 0, qid 0 00:26:37.322 [2024-11-26 07:36:05.375344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.322 [2024-11-26 07:36:05.375351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.322 [2024-11-26 07:36:05.375355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9100) on tqpair=0x1487690 00:26:37.322 [2024-11-26 07:36:05.375364] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:26:37.322 [2024-11-26 07:36:05.375372] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:26:37.322 [2024-11-26 07:36:05.375380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1487690) 00:26:37.322 [2024-11-26 07:36:05.375399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.322 [2024-11-26 07:36:05.375411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9100, cid 0, qid 0 00:26:37.322 [2024-11-26 07:36:05.375485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.322 [2024-11-26 07:36:05.375492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.322 [2024-11-26 07:36:05.375496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9100) on tqpair=0x1487690 00:26:37.322 [2024-11-26 07:36:05.375505] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:26:37.322 [2024-11-26 07:36:05.375513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:37.322 [2024-11-26 07:36:05.375520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.322 [2024-11-26 
07:36:05.375524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1487690) 00:26:37.322 [2024-11-26 07:36:05.375534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.322 [2024-11-26 07:36:05.375545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9100, cid 0, qid 0 00:26:37.322 [2024-11-26 07:36:05.375613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.322 [2024-11-26 07:36:05.375620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.322 [2024-11-26 07:36:05.375623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9100) on tqpair=0x1487690 00:26:37.322 [2024-11-26 07:36:05.375632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:37.322 [2024-11-26 07:36:05.375641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1487690) 00:26:37.322 [2024-11-26 07:36:05.375656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.322 [2024-11-26 07:36:05.375666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9100, cid 0, qid 0 00:26:37.322 [2024-11-26 07:36:05.375731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.322 [2024-11-26 07:36:05.375738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.322 [2024-11-26 07:36:05.375741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.322 [2024-11-26 07:36:05.375745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9100) on tqpair=0x1487690 00:26:37.322 [2024-11-26 07:36:05.375750] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:37.322 [2024-11-26 07:36:05.375755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:37.322 [2024-11-26 07:36:05.375763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:37.322 [2024-11-26 07:36:05.375872] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:26:37.322 [2024-11-26 07:36:05.375879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:37.322 [2024-11-26 07:36:05.375888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.375892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.375895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1487690) 00:26:37.323 
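
The records around this point trace the standard NVMe controller-enable handshake, carried over fabrics PROPERTY GET/SET commands rather than BAR0 register accesses: read CC and CSTS, observe CC.EN = 0 && CSTS.RDY = 0, write CC.EN = 1, then poll until CSTS.RDY = 1. A minimal sketch of that handshake follows; prop_get()/prop_set() are hypothetical stand-ins for the fabrics property commands (in SPDK the real work is done by the nvme_ctrlr.c state machine together with nvme_fabric.c), and only the register offsets and bit positions come from the NVMe spec.

```c
#include <stdint.h>

#define NVME_REG_CC   0x14  /* Controller Configuration (CC.EN is bit 0) */
#define NVME_REG_CSTS 0x1c  /* Controller Status (CSTS.RDY is bit 0) */

/* Hypothetical wrappers for FABRIC PROPERTY GET/SET; not SPDK API. */
extern uint32_t prop_get(uint32_t offset);
extern void prop_set(uint32_t offset, uint32_t value);

static void enable_controller(void)
{
    uint32_t cc = prop_get(NVME_REG_CC);

    if ((cc & 0x1) == 0 && (prop_get(NVME_REG_CSTS) & 0x1) == 0) {
        /* "CC.EN = 0 && CSTS.RDY = 0": controller is disabled, enable it. */
        prop_set(NVME_REG_CC, cc | 0x1);  /* "Setting CC.EN = 1" */
    }

    /* "wait for CSTS.RDY = 1" (the log runs this with a 15000 ms timeout). */
    while ((prop_get(NVME_REG_CSTS) & 0x1) == 0) {
        /* poll; timeout handling omitted in this sketch */
    }
}
```

Each property read in that polling loop is what produces one "FABRIC PROPERTY GET qid:0 cid:0" NOTICE line in the log.
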
[2024-11-26 07:36:05.375902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.323 [2024-11-26 07:36:05.375914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9100, cid 0, qid 0 00:26:37.323 [2024-11-26 07:36:05.375993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.323 [2024-11-26 07:36:05.376000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.323 [2024-11-26 07:36:05.376003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9100) on tqpair=0x1487690 00:26:37.323 [2024-11-26 07:36:05.376012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:37.323 [2024-11-26 07:36:05.376022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1487690) 00:26:37.323 [2024-11-26 07:36:05.376036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.323 [2024-11-26 07:36:05.376047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9100, cid 0, qid 0 00:26:37.323 [2024-11-26 07:36:05.376108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.323 [2024-11-26 07:36:05.376114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.323 [2024-11-26 07:36:05.376118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9100) on tqpair=0x1487690 00:26:37.323 [2024-11-26 07:36:05.376126] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:37.323 [2024-11-26 07:36:05.376131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:37.323 [2024-11-26 07:36:05.376139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:26:37.323 [2024-11-26 07:36:05.376147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:37.323 [2024-11-26 07:36:05.376157] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1487690) 00:26:37.323 [2024-11-26 07:36:05.376177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.323 [2024-11-26 07:36:05.376189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9100, cid 0, qid 0 00:26:37.323 [2024-11-26 07:36:05.376293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.323 [2024-11-26 07:36:05.376300] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.323 [2024-11-26 07:36:05.376304] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376308] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1487690): datao=0, datal=4096, cccid=0 00:26:37.323 [2024-11-26 07:36:05.376312] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e9100) on tqpair(0x1487690): expected_datao=0, payload_size=4096 00:26:37.323 [2024-11-26 07:36:05.376320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376363] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376368] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.323 [2024-11-26 07:36:05.376457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.323 [2024-11-26 07:36:05.376460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9100) on tqpair=0x1487690 00:26:37.323 [2024-11-26 07:36:05.376472] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:26:37.323 [2024-11-26 07:36:05.376477] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:26:37.323 [2024-11-26 07:36:05.376482] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:26:37.323 [2024-11-26 07:36:05.376492] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:26:37.323 [2024-11-26 07:36:05.376497] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:26:37.323 [2024-11-26 07:36:05.376502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:26:37.323 [2024-11-26 07:36:05.376512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:37.323 [2024-11-26 07:36:05.376520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1487690) 00:26:37.323 [2024-11-26 07:36:05.376535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:37.323 [2024-11-26 07:36:05.376547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9100, cid 0, qid 0 00:26:37.323 [2024-11-26 07:36:05.376617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.323 [2024-11-26 07:36:05.376624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.323 [2024-11-26 07:36:05.376627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9100) on tqpair=0x1487690 00:26:37.323 
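
The identify-controller completion just above reports "transport max_xfer_size 4294967295" against "MDTS max_xfer_size 131072"; the effective limit is the smaller of the two, which is why the controller report further down shows "Max Data Transfer Size: 131072". MDTS itself is encoded in Identify Controller as a power-of-two multiple of the minimum memory page size (CAP.MPSMIN). A short decoding sketch, consistent with the values in this log (4 KiB minimum page and MDTS = 5, hence 4096 << 5 = 131072):

```c
#include <stdint.h>

/* Decode Identify Controller MDTS into bytes. CAP.MPSMIN is a shift
 * relative to 4 KiB, so page_size = 2^(12 + mpsmin). MDTS == 0 means
 * the controller imposes no transfer-size limit. */
static uint64_t mdts_to_bytes(uint8_t mdts, uint8_t mpsmin)
{
    uint64_t page_size = 1ULL << (12 + mpsmin);

    return (mdts == 0) ? UINT64_MAX : page_size << mdts;
}
/* With mpsmin = 0 and mdts = 5: 4096 << 5 == 131072, matching the log. */
```
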
[2024-11-26 07:36:05.376638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1487690) 00:26:37.323 [2024-11-26 07:36:05.376652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.323 [2024-11-26 07:36:05.376659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1487690) 00:26:37.323 [2024-11-26 07:36:05.376673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.323 [2024-11-26 07:36:05.376679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1487690) 00:26:37.323 [2024-11-26 07:36:05.376692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.323 [2024-11-26 07:36:05.376700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1487690) 00:26:37.323 [2024-11-26 07:36:05.376713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.323 [2024-11-26 07:36:05.376718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:37.323 [2024-11-26 07:36:05.376726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:37.323 [2024-11-26 07:36:05.376733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1487690) 00:26:37.323 [2024-11-26 07:36:05.376743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.323 [2024-11-26 07:36:05.376756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9100, cid 0, qid 0 00:26:37.323 [2024-11-26 07:36:05.376762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9280, cid 1, qid 0 00:26:37.323 [2024-11-26 07:36:05.376769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9400, cid 2, qid 0 00:26:37.323 [2024-11-26 07:36:05.376778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9580, cid 3, qid 0 00:26:37.323 [2024-11-26 07:36:05.376783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9700, cid 
4, qid 0 00:26:37.323 [2024-11-26 07:36:05.376895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.323 [2024-11-26 07:36:05.376901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.323 [2024-11-26 07:36:05.376904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9700) on tqpair=0x1487690 00:26:37.323 [2024-11-26 07:36:05.376916] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:26:37.323 [2024-11-26 07:36:05.376922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:37.323 [2024-11-26 07:36:05.376931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:26:37.323 [2024-11-26 07:36:05.376938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:37.323 [2024-11-26 07:36:05.376944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.376952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1487690) 00:26:37.323 [2024-11-26 07:36:05.376958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:37.323 [2024-11-26 07:36:05.376969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9700, cid 4, qid 0 00:26:37.323 [2024-11-26 07:36:05.377040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.323 [2024-11-26 07:36:05.377046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.323 [2024-11-26 07:36:05.377050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.323 [2024-11-26 07:36:05.377054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9700) on tqpair=0x1487690 00:26:37.324 [2024-11-26 07:36:05.377124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:26:37.324 [2024-11-26 07:36:05.377136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:37.324 [2024-11-26 07:36:05.377144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.324 [2024-11-26 07:36:05.377148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1487690) 00:26:37.324 [2024-11-26 07:36:05.377154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.324 [2024-11-26 07:36:05.377175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9700, cid 4, qid 0 00:26:37.324 [2024-11-26 07:36:05.377276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.324 [2024-11-26 07:36:05.377285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.324 [2024-11-26 07:36:05.377293] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.324 [2024-11-26 07:36:05.377297] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1487690): datao=0, datal=4096, cccid=4 00:26:37.324 [2024-11-26 07:36:05.377301] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e9700) on tqpair(0x1487690): expected_datao=0, payload_size=4096 00:26:37.324 [2024-11-26 07:36:05.377306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.324 [2024-11-26 07:36:05.377320] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.324 [2024-11-26 07:36:05.377324] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.420172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.588 [2024-11-26 07:36:05.420187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.588 [2024-11-26 07:36:05.420191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.420195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9700) on tqpair=0x1487690 00:26:37.588 [2024-11-26 07:36:05.420208] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:26:37.588 [2024-11-26 07:36:05.420229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.420239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.420248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.420252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1487690) 00:26:37.588 [2024-11-26 07:36:05.420260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.588 [2024-11-26 07:36:05.420275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9700, cid 4, qid 0 00:26:37.588 [2024-11-26 07:36:05.420407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.588 [2024-11-26 07:36:05.420417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.588 [2024-11-26 07:36:05.420424] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.420428] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1487690): datao=0, datal=4096, cccid=4 00:26:37.588 [2024-11-26 07:36:05.420433] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e9700) on tqpair(0x1487690): expected_datao=0, payload_size=4096 00:26:37.588 [2024-11-26 07:36:05.420438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.420454] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.420458] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.461218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.588 [2024-11-26 07:36:05.461232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.588 [2024-11-26 07:36:05.461242] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.588 
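
From here the identify flow pulls the active-namespace list and the per-namespace data in 4 KiB c2h payloads ("Namespace 1 was added", then IDENTIFY with cdw10:00000000 for the namespace data and, just below, cdw10:00000003 for its ID descriptors). Applications normally get all of this through SPDK's public controller API rather than raw admin commands; a minimal sketch of connecting to the same subsystem and walking its active namespaces, roughly what the spdk_nvme_identify run above automates (TRID string taken from the logged command line; SPDK environment initialization via spdk_env_init() is assumed to have happened already):

```c
#include "spdk/nvme.h"
#include <stdio.h>

/* Sketch: connect to the subsystem under test and list its active
 * namespaces. Error reporting is reduced to bare return codes. */
static int list_namespaces(void)
{
    struct spdk_nvme_transport_id trid = {};

    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return -1;
    }

    struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return -1;
    }

    for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
         nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        printf("Namespace %u: %ju bytes\n", nsid,
               (uintmax_t)spdk_nvme_ns_get_size(ns));
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}
```
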
[2024-11-26 07:36:05.461246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9700) on tqpair=0x1487690 00:26:37.588 [2024-11-26 07:36:05.461266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.461277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.461287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.461292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1487690) 00:26:37.588 [2024-11-26 07:36:05.461299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.588 [2024-11-26 07:36:05.461313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9700, cid 4, qid 0 00:26:37.588 [2024-11-26 07:36:05.461392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.588 [2024-11-26 07:36:05.461403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.588 [2024-11-26 07:36:05.461409] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.461413] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1487690): datao=0, datal=4096, cccid=4 00:26:37.588 [2024-11-26 07:36:05.461418] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e9700) on tqpair(0x1487690): expected_datao=0, payload_size=4096 00:26:37.588 [2024-11-26 07:36:05.461422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.461436] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.461441] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.502220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.588 [2024-11-26 07:36:05.502232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.588 [2024-11-26 07:36:05.502239] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.502243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9700) on tqpair=0x1487690 00:26:37.588 [2024-11-26 07:36:05.502253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.502262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.502273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.502279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.502285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.502290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.502296] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:26:37.588 [2024-11-26 07:36:05.502301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:26:37.588 [2024-11-26 07:36:05.502306] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:26:37.588 [2024-11-26 07:36:05.502325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.502329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1487690) 00:26:37.588 [2024-11-26 07:36:05.502340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.588 [2024-11-26 07:36:05.502347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.502351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.502354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1487690) 00:26:37.588 [2024-11-26 07:36:05.502361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.588 [2024-11-26 07:36:05.502378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9700, cid 4, qid 0 00:26:37.588 [2024-11-26 07:36:05.502384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9880, cid 5, qid 0 00:26:37.588 [2024-11-26 07:36:05.502474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.588 [2024-11-26 07:36:05.502481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.588 [2024-11-26 07:36:05.502484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.502488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9700) on tqpair=0x1487690 00:26:37.588 [2024-11-26 07:36:05.502495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.588 [2024-11-26 07:36:05.502501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.588 [2024-11-26 07:36:05.502504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.502508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9880) on tqpair=0x1487690 00:26:37.588 [2024-11-26 07:36:05.502517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.502521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1487690) 00:26:37.588 [2024-11-26 07:36:05.502528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.588 [2024-11-26 07:36:05.502538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9880, cid 5, qid 0 00:26:37.588 [2024-11-26 07:36:05.502624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.588 [2024-11-26 07:36:05.502631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.588 [2024-11-26 
07:36:05.502634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.502638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9880) on tqpair=0x1487690 00:26:37.588 [2024-11-26 07:36:05.502647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.588 [2024-11-26 07:36:05.502651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1487690) 00:26:37.589 [2024-11-26 07:36:05.502658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.589 [2024-11-26 07:36:05.502668] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9880, cid 5, qid 0 00:26:37.589 [2024-11-26 07:36:05.502743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.589 [2024-11-26 07:36:05.502750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.589 [2024-11-26 07:36:05.502753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.502757] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9880) on tqpair=0x1487690 00:26:37.589 [2024-11-26 07:36:05.502767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.502771] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1487690) 00:26:37.589 [2024-11-26 07:36:05.502777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.589 [2024-11-26 07:36:05.502788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9880, cid 5, qid 0 00:26:37.589 [2024-11-26 07:36:05.502854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.589 [2024-11-26 07:36:05.502861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.589 [2024-11-26 07:36:05.502864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.502868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9880) on tqpair=0x1487690 00:26:37.589 [2024-11-26 07:36:05.502885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.502889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1487690) 00:26:37.589 [2024-11-26 07:36:05.502896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.589 [2024-11-26 07:36:05.502903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.502907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1487690) 00:26:37.589 [2024-11-26 07:36:05.502913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.589 [2024-11-26 07:36:05.502920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.502924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1487690) 00:26:37.589 [2024-11-26 07:36:05.502930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.589 [2024-11-26 07:36:05.502938] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.502941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1487690) 00:26:37.589 [2024-11-26 07:36:05.502947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.589 [2024-11-26 07:36:05.502959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9880, cid 5, qid 0 00:26:37.589 [2024-11-26 07:36:05.502965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9700, cid 4, qid 0 00:26:37.589 [2024-11-26 07:36:05.502973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9a00, cid 6, qid 0 00:26:37.589 [2024-11-26 07:36:05.502981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9b80, cid 7, qid 0 00:26:37.589 [2024-11-26 07:36:05.503148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.589 [2024-11-26 07:36:05.503168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.589 [2024-11-26 07:36:05.503173] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503176] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1487690): datao=0, datal=8192, cccid=5 00:26:37.589 [2024-11-26 07:36:05.503181] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e9880) on tqpair(0x1487690): expected_datao=0, payload_size=8192 00:26:37.589 [2024-11-26 07:36:05.503185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503246] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503252] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.589 [2024-11-26 07:36:05.503268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.589 [2024-11-26 07:36:05.503272] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503275] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1487690): datao=0, datal=512, cccid=4 00:26:37.589 [2024-11-26 07:36:05.503280] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e9700) on tqpair(0x1487690): expected_datao=0, payload_size=512 00:26:37.589 [2024-11-26 07:36:05.503284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503294] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503297] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.589 [2024-11-26 07:36:05.503309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.589 [2024-11-26 07:36:05.503312] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503315] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1487690): datao=0, datal=512, cccid=6 00:26:37.589 [2024-11-26 
07:36:05.503320] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e9a00) on tqpair(0x1487690): expected_datao=0, payload_size=512 00:26:37.589 [2024-11-26 07:36:05.503324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503330] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503334] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:37.589 [2024-11-26 07:36:05.503345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:37.589 [2024-11-26 07:36:05.503348] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503352] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1487690): datao=0, datal=4096, cccid=7 00:26:37.589 [2024-11-26 07:36:05.503356] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14e9b80) on tqpair(0x1487690): expected_datao=0, payload_size=4096 00:26:37.589 [2024-11-26 07:36:05.503361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503375] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.503378] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.544226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.589 [2024-11-26 07:36:05.544241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.589 [2024-11-26 07:36:05.544245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.544250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9880) on tqpair=0x1487690 00:26:37.589 [2024-11-26 07:36:05.544268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.589 [2024-11-26 07:36:05.544274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.589 [2024-11-26 07:36:05.544278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.544282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9700) on tqpair=0x1487690 00:26:37.589 [2024-11-26 07:36:05.544293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.589 [2024-11-26 07:36:05.544299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.589 [2024-11-26 07:36:05.544302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.544306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9a00) on tqpair=0x1487690 00:26:37.589 [2024-11-26 07:36:05.544313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.589 [2024-11-26 07:36:05.544319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.589 [2024-11-26 07:36:05.544322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.589 [2024-11-26 07:36:05.544326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9b80) on tqpair=0x1487690 00:26:37.589 ===================================================== 00:26:37.589 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:37.589 ===================================================== 00:26:37.589 Controller Capabilities/Features 00:26:37.589 
================================ 00:26:37.589 Vendor ID: 8086 00:26:37.589 Subsystem Vendor ID: 8086 00:26:37.589 Serial Number: SPDK00000000000001 00:26:37.589 Model Number: SPDK bdev Controller 00:26:37.589 Firmware Version: 25.01 00:26:37.589 Recommended Arb Burst: 6 00:26:37.589 IEEE OUI Identifier: e4 d2 5c 00:26:37.589 Multi-path I/O 00:26:37.589 May have multiple subsystem ports: Yes 00:26:37.589 May have multiple controllers: Yes 00:26:37.589 Associated with SR-IOV VF: No 00:26:37.589 Max Data Transfer Size: 131072 00:26:37.589 Max Number of Namespaces: 32 00:26:37.589 Max Number of I/O Queues: 127 00:26:37.589 NVMe Specification Version (VS): 1.3 00:26:37.589 NVMe Specification Version (Identify): 1.3 00:26:37.589 Maximum Queue Entries: 128 00:26:37.589 Contiguous Queues Required: Yes 00:26:37.589 Arbitration Mechanisms Supported 00:26:37.589 Weighted Round Robin: Not Supported 00:26:37.589 Vendor Specific: Not Supported 00:26:37.589 Reset Timeout: 15000 ms 00:26:37.589 Doorbell Stride: 4 bytes 00:26:37.589 NVM Subsystem Reset: Not Supported 00:26:37.589 Command Sets Supported 00:26:37.589 NVM Command Set: Supported 00:26:37.589 Boot Partition: Not Supported 00:26:37.589 Memory Page Size Minimum: 4096 bytes 00:26:37.589 Memory Page Size Maximum: 4096 bytes 00:26:37.589 Persistent Memory Region: Not Supported 00:26:37.589 Optional Asynchronous Events Supported 00:26:37.589 Namespace Attribute Notices: Supported 00:26:37.589 Firmware Activation Notices: Not Supported 00:26:37.589 ANA Change Notices: Not Supported 00:26:37.589 PLE Aggregate Log Change Notices: Not Supported 00:26:37.589 LBA Status Info Alert Notices: Not Supported 00:26:37.589 EGE Aggregate Log Change Notices: Not Supported 00:26:37.589 Normal NVM Subsystem Shutdown event: Not Supported 00:26:37.589 Zone Descriptor Change Notices: Not Supported 00:26:37.589 Discovery Log Change Notices: Not Supported 00:26:37.589 Controller Attributes 00:26:37.590 128-bit Host Identifier: Supported 00:26:37.590 Non-Operational Permissive Mode: Not Supported 00:26:37.590 NVM Sets: Not Supported 00:26:37.590 Read Recovery Levels: Not Supported 00:26:37.590 Endurance Groups: Not Supported 00:26:37.590 Predictable Latency Mode: Not Supported 00:26:37.590 Traffic Based Keep ALive: Not Supported 00:26:37.590 Namespace Granularity: Not Supported 00:26:37.590 SQ Associations: Not Supported 00:26:37.590 UUID List: Not Supported 00:26:37.590 Multi-Domain Subsystem: Not Supported 00:26:37.590 Fixed Capacity Management: Not Supported 00:26:37.590 Variable Capacity Management: Not Supported 00:26:37.590 Delete Endurance Group: Not Supported 00:26:37.590 Delete NVM Set: Not Supported 00:26:37.590 Extended LBA Formats Supported: Not Supported 00:26:37.590 Flexible Data Placement Supported: Not Supported 00:26:37.590 00:26:37.590 Controller Memory Buffer Support 00:26:37.590 ================================ 00:26:37.590 Supported: No 00:26:37.590 00:26:37.590 Persistent Memory Region Support 00:26:37.590 ================================ 00:26:37.590 Supported: No 00:26:37.590 00:26:37.590 Admin Command Set Attributes 00:26:37.590 ============================ 00:26:37.590 Security Send/Receive: Not Supported 00:26:37.590 Format NVM: Not Supported 00:26:37.590 Firmware Activate/Download: Not Supported 00:26:37.590 Namespace Management: Not Supported 00:26:37.590 Device Self-Test: Not Supported 00:26:37.590 Directives: Not Supported 00:26:37.590 NVMe-MI: Not Supported 00:26:37.590 Virtualization Management: Not Supported 00:26:37.590 Doorbell Buffer 
Config: Not Supported 00:26:37.590 Get LBA Status Capability: Not Supported 00:26:37.590 Command & Feature Lockdown Capability: Not Supported 00:26:37.590 Abort Command Limit: 4 00:26:37.590 Async Event Request Limit: 4 00:26:37.590 Number of Firmware Slots: N/A 00:26:37.590 Firmware Slot 1 Read-Only: N/A 00:26:37.590 Firmware Activation Without Reset: N/A 00:26:37.590 Multiple Update Detection Support: N/A 00:26:37.590 Firmware Update Granularity: No Information Provided 00:26:37.590 Per-Namespace SMART Log: No 00:26:37.590 Asymmetric Namespace Access Log Page: Not Supported 00:26:37.590 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:37.590 Command Effects Log Page: Supported 00:26:37.590 Get Log Page Extended Data: Supported 00:26:37.590 Telemetry Log Pages: Not Supported 00:26:37.590 Persistent Event Log Pages: Not Supported 00:26:37.590 Supported Log Pages Log Page: May Support 00:26:37.590 Commands Supported & Effects Log Page: Not Supported 00:26:37.590 Feature Identifiers & Effects Log Page:May Support 00:26:37.590 NVMe-MI Commands & Effects Log Page: May Support 00:26:37.590 Data Area 4 for Telemetry Log: Not Supported 00:26:37.590 Error Log Page Entries Supported: 128 00:26:37.590 Keep Alive: Supported 00:26:37.590 Keep Alive Granularity: 10000 ms 00:26:37.590 00:26:37.590 NVM Command Set Attributes 00:26:37.590 ========================== 00:26:37.590 Submission Queue Entry Size 00:26:37.590 Max: 64 00:26:37.590 Min: 64 00:26:37.590 Completion Queue Entry Size 00:26:37.590 Max: 16 00:26:37.590 Min: 16 00:26:37.590 Number of Namespaces: 32 00:26:37.590 Compare Command: Supported 00:26:37.590 Write Uncorrectable Command: Not Supported 00:26:37.590 Dataset Management Command: Supported 00:26:37.590 Write Zeroes Command: Supported 00:26:37.590 Set Features Save Field: Not Supported 00:26:37.590 Reservations: Supported 00:26:37.590 Timestamp: Not Supported 00:26:37.590 Copy: Supported 00:26:37.590 Volatile Write Cache: Present 00:26:37.590 Atomic Write Unit (Normal): 1 00:26:37.590 Atomic Write Unit (PFail): 1 00:26:37.590 Atomic Compare & Write Unit: 1 00:26:37.590 Fused Compare & Write: Supported 00:26:37.590 Scatter-Gather List 00:26:37.590 SGL Command Set: Supported 00:26:37.590 SGL Keyed: Supported 00:26:37.590 SGL Bit Bucket Descriptor: Not Supported 00:26:37.590 SGL Metadata Pointer: Not Supported 00:26:37.590 Oversized SGL: Not Supported 00:26:37.590 SGL Metadata Address: Not Supported 00:26:37.590 SGL Offset: Supported 00:26:37.590 Transport SGL Data Block: Not Supported 00:26:37.590 Replay Protected Memory Block: Not Supported 00:26:37.590 00:26:37.590 Firmware Slot Information 00:26:37.590 ========================= 00:26:37.590 Active slot: 1 00:26:37.590 Slot 1 Firmware Revision: 25.01 00:26:37.590 00:26:37.590 00:26:37.590 Commands Supported and Effects 00:26:37.590 ============================== 00:26:37.590 Admin Commands 00:26:37.590 -------------- 00:26:37.590 Get Log Page (02h): Supported 00:26:37.590 Identify (06h): Supported 00:26:37.590 Abort (08h): Supported 00:26:37.590 Set Features (09h): Supported 00:26:37.590 Get Features (0Ah): Supported 00:26:37.590 Asynchronous Event Request (0Ch): Supported 00:26:37.590 Keep Alive (18h): Supported 00:26:37.590 I/O Commands 00:26:37.590 ------------ 00:26:37.590 Flush (00h): Supported LBA-Change 00:26:37.590 Write (01h): Supported LBA-Change 00:26:37.590 Read (02h): Supported 00:26:37.590 Compare (05h): Supported 00:26:37.590 Write Zeroes (08h): Supported LBA-Change 00:26:37.590 Dataset Management (09h): Supported 
LBA-Change 00:26:37.590 Copy (19h): Supported LBA-Change 00:26:37.590 00:26:37.590 Error Log 00:26:37.590 ========= 00:26:37.590 00:26:37.590 Arbitration 00:26:37.590 =========== 00:26:37.590 Arbitration Burst: 1 00:26:37.590 00:26:37.590 Power Management 00:26:37.590 ================ 00:26:37.590 Number of Power States: 1 00:26:37.590 Current Power State: Power State #0 00:26:37.590 Power State #0: 00:26:37.590 Max Power: 0.00 W 00:26:37.590 Non-Operational State: Operational 00:26:37.590 Entry Latency: Not Reported 00:26:37.590 Exit Latency: Not Reported 00:26:37.590 Relative Read Throughput: 0 00:26:37.590 Relative Read Latency: 0 00:26:37.590 Relative Write Throughput: 0 00:26:37.590 Relative Write Latency: 0 00:26:37.590 Idle Power: Not Reported 00:26:37.590 Active Power: Not Reported 00:26:37.590 Non-Operational Permissive Mode: Not Supported 00:26:37.590 00:26:37.590 Health Information 00:26:37.590 ================== 00:26:37.590 Critical Warnings: 00:26:37.590 Available Spare Space: OK 00:26:37.590 Temperature: OK 00:26:37.590 Device Reliability: OK 00:26:37.590 Read Only: No 00:26:37.590 Volatile Memory Backup: OK 00:26:37.590 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:37.590 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:37.590 Available Spare: 0% 00:26:37.590 Available Spare Threshold: 0% 00:26:37.590 Life Percentage Used:[2024-11-26 07:36:05.544434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.590 [2024-11-26 07:36:05.544439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1487690) 00:26:37.590 [2024-11-26 07:36:05.544448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.590 [2024-11-26 07:36:05.544462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9b80, cid 7, qid 0 00:26:37.590 [2024-11-26 07:36:05.544530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.590 [2024-11-26 07:36:05.544537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.590 [2024-11-26 07:36:05.544541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.590 [2024-11-26 07:36:05.544545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9b80) on tqpair=0x1487690 00:26:37.590 [2024-11-26 07:36:05.544582] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:26:37.590 [2024-11-26 07:36:05.544592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9100) on tqpair=0x1487690 00:26:37.590 [2024-11-26 07:36:05.544599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.590 [2024-11-26 07:36:05.544604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9280) on tqpair=0x1487690 00:26:37.590 [2024-11-26 07:36:05.544609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.590 [2024-11-26 07:36:05.544614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9400) on tqpair=0x1487690 00:26:37.590 [2024-11-26 07:36:05.544619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.590 [2024-11-26 07:36:05.544623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x14e9580) on tqpair=0x1487690 00:26:37.590 [2024-11-26 07:36:05.544628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.590 [2024-11-26 07:36:05.544637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.590 [2024-11-26 07:36:05.544641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.590 [2024-11-26 07:36:05.544644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1487690) 00:26:37.590 [2024-11-26 07:36:05.544651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.590 [2024-11-26 07:36:05.544664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9580, cid 3, qid 0 00:26:37.591 [2024-11-26 07:36:05.544733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.591 [2024-11-26 07:36:05.544740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.591 [2024-11-26 07:36:05.544743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.544747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9580) on tqpair=0x1487690 00:26:37.591 [2024-11-26 07:36:05.544755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.544758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.544762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1487690) 00:26:37.591 [2024-11-26 07:36:05.544769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.591 [2024-11-26 07:36:05.544782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9580, cid 3, qid 0 00:26:37.591 [2024-11-26 07:36:05.544855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.591 [2024-11-26 07:36:05.544861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.591 [2024-11-26 07:36:05.544865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.544869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9580) on tqpair=0x1487690 00:26:37.591 [2024-11-26 07:36:05.544874] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:26:37.591 [2024-11-26 07:36:05.544879] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:26:37.591 [2024-11-26 07:36:05.544889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.544893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.544899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1487690) 00:26:37.591 [2024-11-26 07:36:05.544906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.591 [2024-11-26 07:36:05.544916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9580, cid 3, qid 0 00:26:37.591 [2024-11-26 07:36:05.544983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.591 [2024-11-26 
07:36:05.544990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.591 [2024-11-26 07:36:05.544993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.544997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9580) on tqpair=0x1487690 00:26:37.591 [2024-11-26 07:36:05.545007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.545011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.545015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1487690) 00:26:37.591 [2024-11-26 07:36:05.545021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.591 [2024-11-26 07:36:05.545032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9580, cid 3, qid 0 00:26:37.591 [2024-11-26 07:36:05.545093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.591 [2024-11-26 07:36:05.545099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.591 [2024-11-26 07:36:05.545103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.545106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9580) on tqpair=0x1487690 00:26:37.591 [2024-11-26 07:36:05.545116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.545120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.545124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1487690) 00:26:37.591 [2024-11-26 07:36:05.545130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.591 [2024-11-26 07:36:05.545141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9580, cid 3, qid 0 00:26:37.591 [2024-11-26 07:36:05.549169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.591 [2024-11-26 07:36:05.549178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.591 [2024-11-26 07:36:05.549182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.549186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9580) on tqpair=0x1487690 00:26:37.591 [2024-11-26 07:36:05.549196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.549200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:37.591 [2024-11-26 07:36:05.549204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1487690) 00:26:37.591 [2024-11-26 07:36:05.549211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.591 [2024-11-26 07:36:05.549223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14e9580, cid 3, qid 0 00:26:37.591 [2024-11-26 07:36:05.549323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:37.591 [2024-11-26 07:36:05.549330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:37.591 [2024-11-26 07:36:05.549333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:37.591 
[2024-11-26 07:36:05.549337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14e9580) on tqpair=0x1487690 00:26:37.591 [2024-11-26 07:36:05.549346] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:26:37.591 0% 00:26:37.591 Data Units Read: 0 00:26:37.591 Data Units Written: 0 00:26:37.591 Host Read Commands: 0 00:26:37.591 Host Write Commands: 0 00:26:37.591 Controller Busy Time: 0 minutes 00:26:37.591 Power Cycles: 0 00:26:37.591 Power On Hours: 0 hours 00:26:37.591 Unsafe Shutdowns: 0 00:26:37.591 Unrecoverable Media Errors: 0 00:26:37.591 Lifetime Error Log Entries: 0 00:26:37.591 Warning Temperature Time: 0 minutes 00:26:37.591 Critical Temperature Time: 0 minutes 00:26:37.591 00:26:37.591 Number of Queues 00:26:37.591 ================ 00:26:37.591 Number of I/O Submission Queues: 127 00:26:37.591 Number of I/O Completion Queues: 127 00:26:37.591 00:26:37.591 Active Namespaces 00:26:37.591 ================= 00:26:37.591 Namespace ID:1 00:26:37.591 Error Recovery Timeout: Unlimited 00:26:37.591 Command Set Identifier: NVM (00h) 00:26:37.591 Deallocate: Supported 00:26:37.591 Deallocated/Unwritten Error: Not Supported 00:26:37.591 Deallocated Read Value: Unknown 00:26:37.591 Deallocate in Write Zeroes: Not Supported 00:26:37.591 Deallocated Guard Field: 0xFFFF 00:26:37.591 Flush: Supported 00:26:37.591 Reservation: Supported 00:26:37.591 Namespace Sharing Capabilities: Multiple Controllers 00:26:37.591 Size (in LBAs): 131072 (0GiB) 00:26:37.591 Capacity (in LBAs): 131072 (0GiB) 00:26:37.591 Utilization (in LBAs): 131072 (0GiB) 00:26:37.591 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:37.591 EUI64: ABCDEF0123456789 00:26:37.591 UUID: 2444f887-57b0-49c1-b387-37b127af9b97 00:26:37.591 Thin Provisioning: Not Supported 00:26:37.591 Per-NS Atomic Units: Yes 00:26:37.591 Atomic Boundary Size (Normal): 0 00:26:37.591 Atomic Boundary Size (PFail): 0 00:26:37.591 Atomic Boundary Offset: 0 00:26:37.591 Maximum Single Source Range Length: 65535 00:26:37.591 Maximum Copy Length: 65535 00:26:37.591 Maximum Source Range Count: 1 00:26:37.591 NGUID/EUI64 Never Reused: No 00:26:37.591 Namespace Write Protected: No 00:26:37.591 Number of LBA Formats: 1 00:26:37.591 Current LBA Format: LBA Format #00 00:26:37.591 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:37.591 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:37.591 07:36:05 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:37.591 rmmod nvme_tcp 00:26:37.591 rmmod nvme_fabrics 00:26:37.591 rmmod nvme_keyring 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1554279 ']' 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1554279 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1554279 ']' 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1554279 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.591 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1554279 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1554279' 00:26:37.853 killing process with pid 1554279 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1554279 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1554279 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.853 07:36:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.403 07:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:40.403 00:26:40.403 real 0m11.918s 00:26:40.403 user 0m9.243s 00:26:40.403 sys 0m6.313s 00:26:40.403 07:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:40.403 
07:36:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:40.403 ************************************ 00:26:40.403 END TEST nvmf_identify 00:26:40.403 ************************************ 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.403 ************************************ 00:26:40.403 START TEST nvmf_perf 00:26:40.403 ************************************ 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:40.403 * Looking for test storage... 00:26:40.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:40.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.403 --rc genhtml_branch_coverage=1 00:26:40.403 --rc genhtml_function_coverage=1 00:26:40.403 --rc genhtml_legend=1 00:26:40.403 --rc geninfo_all_blocks=1 00:26:40.403 --rc geninfo_unexecuted_blocks=1 00:26:40.403 00:26:40.403 ' 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:40.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.403 --rc genhtml_branch_coverage=1 00:26:40.403 --rc genhtml_function_coverage=1 00:26:40.403 --rc genhtml_legend=1 00:26:40.403 --rc geninfo_all_blocks=1 00:26:40.403 --rc geninfo_unexecuted_blocks=1 00:26:40.403 00:26:40.403 ' 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:40.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.403 --rc genhtml_branch_coverage=1 00:26:40.403 --rc genhtml_function_coverage=1 00:26:40.403 --rc genhtml_legend=1 00:26:40.403 --rc geninfo_all_blocks=1 00:26:40.403 --rc geninfo_unexecuted_blocks=1 00:26:40.403 00:26:40.403 ' 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:40.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.403 --rc genhtml_branch_coverage=1 00:26:40.403 --rc genhtml_function_coverage=1 00:26:40.403 --rc genhtml_legend=1 00:26:40.403 --rc geninfo_all_blocks=1 00:26:40.403 --rc geninfo_unexecuted_blocks=1 00:26:40.403 00:26:40.403 ' 00:26:40.403 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:40.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.404 07:36:08 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:40.404 07:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:48.552 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:48.552 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:48.552 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.552 07:36:15 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:48.552 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.552 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.553 07:36:15 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:26:48.553 00:26:48.553 --- 10.0.0.2 ping statistics --- 00:26:48.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.553 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:48.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:26:48.553 00:26:48.553 --- 10.0.0.1 ping statistics --- 00:26:48.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.553 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1559419 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1559419 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1559419 ']' 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:26:48.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.553 07:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:48.553 [2024-11-26 07:36:15.952563] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:26:48.553 [2024-11-26 07:36:15.952654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.553 [2024-11-26 07:36:16.052659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:48.553 [2024-11-26 07:36:16.105135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.553 [2024-11-26 07:36:16.105190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.553 [2024-11-26 07:36:16.105199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.553 [2024-11-26 07:36:16.105206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.553 [2024-11-26 07:36:16.105212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.553 [2024-11-26 07:36:16.107248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.553 [2024-11-26 07:36:16.107396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.553 [2024-11-26 07:36:16.107557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.553 [2024-11-26 07:36:16.107558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.815 07:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.815 07:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:26:48.815 07:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.815 07:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.815 07:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:48.815 07:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.815 07:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:48.815 07:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:49.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:49.387 07:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:49.648 07:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:49.648 07:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:49.910 07:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
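Together with the bdev_malloc_create above, the xtrace that follows drives the target bring-up one RPC at a time. Condensed into a plain shell sketch (every command, value and path below is copied from this log; only the NSID comments are inferred from the add order):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                    # backing RAM bdev -> Malloc0
  $rpc nvmf_create_transport -t tcp -o                              # flags exactly as NVMF_TRANSPORT_OPTS expands here
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # added first, presumably NSID 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # the local 0000:65:00.0 drive, presumably NSID 2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420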
00:26:49.910 07:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:49.910 07:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:49.910 07:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:49.910 07:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:49.910 [2024-11-26 07:36:17.937129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.910 07:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.171 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:50.171 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:50.432 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:50.433 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:50.694 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.694 [2024-11-26 07:36:18.748840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.694 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:50.955 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:50.955 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:50.955 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:50.955 07:36:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:52.343 Initializing NVMe Controllers 00:26:52.343 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:52.343 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:52.343 Initialization complete. Launching workers. 
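For orientation before the result tables, the command just traced is the baseline against the local PCIe drive rather than the TCP target. A hedged gloss of its flags (the log records only the command line itself, so the meanings below are the usual spdk_nvme_perf ones, not something this run prints):

  # -i 0: shared-memory ID (attach alongside the running target app);
  # -q 32: outstanding I/Os per namespace; -o 4096: I/O size in bytes;
  # -w randrw -M 50: random mixed workload, 50% reads; -t 1: seconds to run;
  # -r: bind to the local controller instead of a fabrics target.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'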
00:26:52.343 ========================================================
00:26:52.343 Latency(us)
00:26:52.343 Device Information : IOPS MiB/s Average min max
00:26:52.343 PCIE (0000:65:00.0) NSID 1 from core 0: 78896.13 308.19 405.12 13.31 5205.15
00:26:52.343 ========================================================
00:26:52.343 Total : 78896.13 308.19 405.12 13.31 5205.15
00:26:52.343
00:26:52.343 07:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:53.727 Initializing NVMe Controllers
00:26:53.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:53.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:53.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:53.727 Initialization complete. Launching workers.
00:26:53.727 ========================================================
00:26:53.727 Latency(us)
00:26:53.727 Device Information : IOPS MiB/s Average min max
00:26:53.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10295.39 117.97 45553.00
00:26:53.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.00 0.23 17291.89 4986.77 47888.24
00:26:53.727 ========================================================
00:26:53.727 Total : 160.00 0.62 12919.08 117.97 47888.24
00:26:53.727
00:26:53.727 07:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:55.112 Initializing NVMe Controllers
00:26:55.112 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:55.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:55.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:55.112 Initialization complete. Launching workers.
00:26:55.112 ========================================================
00:26:55.112 Latency(us)
00:26:55.112 Device Information : IOPS MiB/s Average min max
00:26:55.112 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11837.91 46.24 2702.98 485.97 6717.96
00:26:55.112 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3727.14 14.56 8599.40 6164.35 16095.38
00:26:55.112 ========================================================
00:26:55.112 Total : 15565.05 60.80 4114.91 485.97 16095.38
00:26:55.112
00:26:55.112 07:36:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
07:36:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
07:36:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:26:57.662 Initializing NVMe Controllers
00:26:57.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:57.662 Controller IO queue size 128, less than required.
00:26:57.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
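A note on that warning: the identify dump earlier reports Maximum Queue Entries: 128, and an NVMe submission queue of N entries can hold at most N - 1 commands in flight (the queue-full condition reserves one slot), so a -q 128 workload can keep at most 128 - 1 = 127 requests outstanding per queue pair and the surplus waits in the host driver. That is presumably why the tool suggests a lower queue depth or smaller I/O size; the run itself still completes, the message is informational.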
00:26:57.662 Controller IO queue size 128, less than required. 00:26:57.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:57.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:57.662 Initialization complete. Launching workers. 00:26:57.662 ======================================================== 00:26:57.662 Latency(us) 00:26:57.662 Device Information : IOPS MiB/s Average min max 00:26:57.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2427.96 606.99 53629.85 32924.62 96349.02 00:26:57.662 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.49 149.37 220333.62 64956.62 362628.48 00:26:57.662 ======================================================== 00:26:57.662 Total : 3025.45 756.36 86551.85 32924.62 362628.48 00:26:57.662 00:26:57.662 07:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:57.662 No valid NVMe controllers or AIO or URING devices found 00:26:57.662 Initializing NVMe Controllers 00:26:57.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:57.662 Controller IO queue size 128, less than required. 00:26:57.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.662 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:57.662 Controller IO queue size 128, less than required. 00:26:57.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:57.662 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:26:57.662 WARNING: Some requested NVMe devices were skipped 00:26:57.662 07:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:00.316 Initializing NVMe Controllers 00:27:00.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.316 Controller IO queue size 128, less than required. 00:27:00.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.316 Controller IO queue size 128, less than required. 00:27:00.316 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:00.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:00.316 Initialization complete. Launching workers. 
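The per-queue transport statistics that follow admit a quick consistency check: subtracting idle polls from total polls gives exactly the reported socket completions on both queues,

  34447 - 21029 = 13418 (NSID 1 sock_completions)
  32243 - 18990 = 13253 (NSID 2 sock_completions)

i.e. in this run every poll that found work retired exactly one socket completion.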
00:27:00.316 00:27:00.316 ==================== 00:27:00.316 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:00.316 TCP transport: 00:27:00.316 polls: 34447 00:27:00.316 idle_polls: 21029 00:27:00.316 sock_completions: 13418 00:27:00.316 nvme_completions: 7217 00:27:00.316 submitted_requests: 10752 00:27:00.316 queued_requests: 1 00:27:00.316 00:27:00.316 ==================== 00:27:00.316 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:00.316 TCP transport: 00:27:00.316 polls: 32243 00:27:00.316 idle_polls: 18990 00:27:00.316 sock_completions: 13253 00:27:00.316 nvme_completions: 8525 00:27:00.316 submitted_requests: 12880 00:27:00.316 queued_requests: 1 00:27:00.316 ======================================================== 00:27:00.316 Latency(us) 00:27:00.316 Device Information : IOPS MiB/s Average min max 00:27:00.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1800.86 450.21 72819.16 39492.82 131368.51 00:27:00.316 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2127.29 531.82 60304.67 24540.79 117433.47 00:27:00.316 ======================================================== 00:27:00.316 Total : 3928.15 982.04 66041.94 24540.79 131368.51 00:27:00.316 00:27:00.316 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:00.316 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:00.316 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:27:00.316 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:00.316 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:00.316 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:00.316 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:27:00.317 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:00.317 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:27:00.317 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:00.317 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:00.317 rmmod nvme_tcp 00:27:00.577 rmmod nvme_fabrics 00:27:00.577 rmmod nvme_keyring 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1559419 ']' 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1559419 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1559419 ']' 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1559419 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1559419 00:27:00.577 07:36:28 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1559419' 00:27:00.577 killing process with pid 1559419 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1559419 00:27:00.577 07:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1559419 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.487 07:36:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:05.033 00:27:05.033 real 0m24.468s 00:27:05.033 user 0m59.077s 00:27:05.033 sys 0m8.704s 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:05.033 ************************************ 00:27:05.033 END TEST nvmf_perf 00:27:05.033 ************************************ 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.033 ************************************ 00:27:05.033 START TEST nvmf_fio_host 00:27:05.033 ************************************ 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:05.033 * Looking for test storage... 
00:27:05.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:05.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.033 --rc genhtml_branch_coverage=1 00:27:05.033 --rc genhtml_function_coverage=1 00:27:05.033 --rc genhtml_legend=1 00:27:05.033 --rc geninfo_all_blocks=1 00:27:05.033 --rc geninfo_unexecuted_blocks=1 00:27:05.033 00:27:05.033 ' 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:05.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.033 --rc genhtml_branch_coverage=1 00:27:05.033 --rc genhtml_function_coverage=1 00:27:05.033 --rc genhtml_legend=1 00:27:05.033 --rc geninfo_all_blocks=1 00:27:05.033 --rc geninfo_unexecuted_blocks=1 00:27:05.033 00:27:05.033 ' 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:05.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.033 --rc genhtml_branch_coverage=1 00:27:05.033 --rc genhtml_function_coverage=1 00:27:05.033 --rc genhtml_legend=1 00:27:05.033 --rc geninfo_all_blocks=1 00:27:05.033 --rc geninfo_unexecuted_blocks=1 00:27:05.033 00:27:05.033 ' 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:05.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.033 --rc genhtml_branch_coverage=1 00:27:05.033 --rc genhtml_function_coverage=1 00:27:05.033 --rc genhtml_legend=1 00:27:05.033 --rc geninfo_all_blocks=1 00:27:05.033 --rc geninfo_unexecuted_blocks=1 00:27:05.033 00:27:05.033 ' 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.033 07:36:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.033 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:05.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:05.034 
07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:05.034 07:36:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:13.175 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:13.175 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.175 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:13.176 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:13.176 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:13.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:27:13.176 00:27:13.176 --- 10.0.0.2 ping statistics --- 00:27:13.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.176 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:13.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:27:13.176 00:27:13.176 --- 10.0.0.1 ping statistics --- 00:27:13.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.176 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1566339 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1566339 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1566339 ']' 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:13.176 07:36:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.176 [2024-11-26 07:36:40.440317] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:27:13.176 [2024-11-26 07:36:40.440385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.176 [2024-11-26 07:36:40.540477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.176 [2024-11-26 07:36:40.594193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.176 [2024-11-26 07:36:40.594245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.176 [2024-11-26 07:36:40.594254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.176 [2024-11-26 07:36:40.594261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.176 [2024-11-26 07:36:40.594268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.176 [2024-11-26 07:36:40.596283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.176 [2024-11-26 07:36:40.596443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.176 [2024-11-26 07:36:40.596607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.176 [2024-11-26 07:36:40.596607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.438 07:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:13.439 07:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:27:13.439 07:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:13.439 [2024-11-26 07:36:41.433241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.439 07:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:13.439 07:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:13.439 07:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.439 07:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:13.700 Malloc1 00:27:13.700 07:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:13.961 07:36:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:14.223 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.223 [2024-11-26 07:36:42.307108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.485 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:14.485 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:14.485 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:14.485 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:14.485 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:14.486 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:14.777 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:14.777 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:14.777 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:14.777 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:14.777 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:14.777 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:14.777 07:36:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:15.037 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:15.037 fio-3.35 00:27:15.037 Starting 1 thread 00:27:17.584 00:27:17.584 test: (groupid=0, jobs=1): 
err= 0: pid=1567021: Tue Nov 26 07:36:45 2024 00:27:17.584 read: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(109MiB/2004msec) 00:27:17.584 slat (usec): min=2, max=295, avg= 2.16, stdev= 2.47 00:27:17.584 clat (usec): min=3023, max=9053, avg=5071.42, stdev=356.29 00:27:17.584 lat (usec): min=3025, max=9055, avg=5073.58, stdev=356.39 00:27:17.584 clat percentiles (usec): 00:27:17.584 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:27:17.584 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:27:17.584 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:27:17.584 | 99.00th=[ 5866], 99.50th=[ 6194], 99.90th=[ 7439], 99.95th=[ 7701], 00:27:17.584 | 99.99th=[ 8848] 00:27:17.584 bw ( KiB/s): min=54168, max=55968, per=99.96%, avg=55442.00, stdev=853.25, samples=4 00:27:17.584 iops : min=13542, max=13992, avg=13860.50, stdev=213.31, samples=4 00:27:17.584 write: IOPS=13.9k, BW=54.2MiB/s (56.8MB/s)(109MiB/2004msec); 0 zone resets 00:27:17.584 slat (usec): min=2, max=268, avg= 2.23, stdev= 1.77 00:27:17.584 clat (usec): min=2623, max=8037, avg=4098.28, stdev=301.51 00:27:17.584 lat (usec): min=2625, max=8039, avg=4100.51, stdev=301.68 00:27:17.584 clat percentiles (usec): 00:27:17.584 | 1.00th=[ 3392], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:27:17.584 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4146], 00:27:17.584 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:27:17.584 | 99.00th=[ 4752], 99.50th=[ 5276], 99.90th=[ 6194], 99.95th=[ 7046], 00:27:17.584 | 99.99th=[ 7701] 00:27:17.584 bw ( KiB/s): min=54568, max=55936, per=99.98%, avg=55488.00, stdev=621.64, samples=4 00:27:17.584 iops : min=13642, max=13984, avg=13872.00, stdev=155.41, samples=4 00:27:17.584 lat (msec) : 4=17.90%, 10=82.10% 00:27:17.584 cpu : usr=74.39%, sys=24.36%, ctx=27, majf=0, minf=17 00:27:17.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:17.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:17.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:17.584 issued rwts: total=27788,27804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:17.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:17.584 00:27:17.584 Run status group 0 (all jobs): 00:27:17.584 READ: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=109MiB (114MB), run=2004-2004msec 00:27:17.584 WRITE: bw=54.2MiB/s (56.8MB/s), 54.2MiB/s-54.2MiB/s (56.8MB/s-56.8MB/s), io=109MiB (114MB), run=2004-2004msec 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:17.584 
07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:17.584 07:36:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:17.844 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:17.844 fio-3.35 00:27:17.844 Starting 1 thread 00:27:20.387 00:27:20.387 test: (groupid=0, jobs=1): err= 0: pid=1567839: Tue Nov 26 07:36:48 2024 00:27:20.387 read: IOPS=9625, BW=150MiB/s (158MB/s)(302MiB/2005msec) 00:27:20.387 slat (usec): min=3, max=111, avg= 3.63, stdev= 1.61 00:27:20.387 clat (usec): min=2294, max=24670, avg=8061.88, stdev=2043.86 00:27:20.387 lat (usec): min=2298, max=24673, avg=8065.51, stdev=2044.08 00:27:20.387 clat percentiles (usec): 00:27:20.387 | 1.00th=[ 4146], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6259], 00:27:20.387 | 30.00th=[ 6849], 40.00th=[ 7439], 50.00th=[ 7963], 60.00th=[ 8586], 00:27:20.387 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11076], 00:27:20.387 | 99.00th=[12911], 99.50th=[14746], 99.90th=[21890], 99.95th=[23725], 00:27:20.387 | 99.99th=[24773] 00:27:20.387 bw ( KiB/s): min=66592, max=85536, per=49.93%, avg=76896.00, stdev=8155.59, samples=4 00:27:20.387 iops : min= 4162, max= 5346, avg=4806.00, stdev=509.72, samples=4 00:27:20.387 write: IOPS=5688, BW=88.9MiB/s (93.2MB/s)(157MiB/1769msec); 0 zone resets 00:27:20.387 slat (usec): min=39, max=333, 
avg=41.43, stdev= 8.62 00:27:20.387 clat (usec): min=2433, max=25499, avg=9024.11, stdev=1723.26 00:27:20.387 lat (usec): min=2473, max=25539, avg=9065.55, stdev=1727.02 00:27:20.387 clat percentiles (usec): 00:27:20.387 | 1.00th=[ 5997], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 7767], 00:27:20.387 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:27:20.387 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10683], 95.00th=[11338], 00:27:20.387 | 99.00th=[14746], 99.50th=[19006], 99.90th=[23987], 99.95th=[24511], 00:27:20.387 | 99.99th=[25035] 00:27:20.387 bw ( KiB/s): min=68736, max=89088, per=87.97%, avg=80064.00, stdev=8934.86, samples=4 00:27:20.387 iops : min= 4296, max= 5568, avg=5004.00, stdev=558.43, samples=4 00:27:20.387 lat (msec) : 4=0.64%, 10=82.37%, 20=16.71%, 50=0.29% 00:27:20.387 cpu : usr=89.02%, sys=9.93%, ctx=11, majf=0, minf=33 00:27:20.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:20.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:20.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:20.388 issued rwts: total=19299,10063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:20.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:20.388 00:27:20.388 Run status group 0 (all jobs): 00:27:20.388 READ: bw=150MiB/s (158MB/s), 150MiB/s-150MiB/s (158MB/s-158MB/s), io=302MiB (316MB), run=2005-2005msec 00:27:20.388 WRITE: bw=88.9MiB/s (93.2MB/s), 88.9MiB/s-88.9MiB/s (93.2MB/s-93.2MB/s), io=157MiB (165MB), run=1769-1769msec 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:20.388 rmmod nvme_tcp 00:27:20.388 rmmod nvme_fabrics 00:27:20.388 rmmod nvme_keyring 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1566339 ']' 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1566339 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1566339 ']' 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 
1566339 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.388 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566339 00:27:20.648 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:20.648 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566339' 00:27:20.649 killing process with pid 1566339 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1566339 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1566339 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.649 07:36:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.194 00:27:23.194 real 0m18.056s 00:27:23.194 user 0m59.102s 00:27:23.194 sys 0m7.713s 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.194 ************************************ 00:27:23.194 END TEST nvmf_fio_host 00:27:23.194 ************************************ 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.194 ************************************ 00:27:23.194 START TEST nvmf_failover 00:27:23.194 ************************************ 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:23.194 * Looking for test storage... 00:27:23.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:27:23.194 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:23.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.195 --rc genhtml_branch_coverage=1 00:27:23.195 --rc genhtml_function_coverage=1 00:27:23.195 --rc genhtml_legend=1 00:27:23.195 --rc geninfo_all_blocks=1 00:27:23.195 --rc geninfo_unexecuted_blocks=1 00:27:23.195 00:27:23.195 ' 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:23.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.195 --rc genhtml_branch_coverage=1 00:27:23.195 --rc genhtml_function_coverage=1 00:27:23.195 --rc genhtml_legend=1 00:27:23.195 --rc geninfo_all_blocks=1 00:27:23.195 --rc geninfo_unexecuted_blocks=1 00:27:23.195 00:27:23.195 ' 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:23.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.195 --rc genhtml_branch_coverage=1 00:27:23.195 --rc genhtml_function_coverage=1 00:27:23.195 --rc genhtml_legend=1 00:27:23.195 --rc geninfo_all_blocks=1 00:27:23.195 --rc geninfo_unexecuted_blocks=1 00:27:23.195 00:27:23.195 ' 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:23.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.195 --rc genhtml_branch_coverage=1 00:27:23.195 --rc genhtml_function_coverage=1 00:27:23.195 --rc genhtml_legend=1 00:27:23.195 --rc geninfo_all_blocks=1 00:27:23.195 --rc geninfo_unexecuted_blocks=1 00:27:23.195 00:27:23.195 ' 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.195 07:36:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:23.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
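The lt 1.15 2 / cmp_versions walk traced earlier in this setup is scripts/common.sh deciding whether the installed lcov is older than 2.0 before picking coverage flags: both version strings are split into numeric fields and compared field by field. A minimal standalone sketch of that comparison (a simplification, not the verbatim scripts/common.sh source, which also splits on '-' and ':' and handles other operators):

  version_lt() {
      # Split both versions on '.' and compare numerically, field by field.
      local -a ver1 ver2
      IFS=. read -ra ver1 <<< "$1"
      IFS=. read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          # Missing fields default to 0, so "1.15" compares like "1.15.0".
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1  # equal is not less-than
  }

  version_lt 1.15 2 && echo "lcov older than 2: use --rc lcov_* coverage options"

Here 1 < 2 already decides the result in the first field, which is why the trace above returns 0 and goes on to export the lcov_branch_coverage/lcov_function_coverage options.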
00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:27:23.195 07:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:27:31.334 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:31.335 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:31.335 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:31.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:31.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:31.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:27:31.335 00:27:31.335 --- 10.0.0.2 ping statistics --- 00:27:31.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.335 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:31.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:27:31.335 00:27:31.335 --- 10.0.0.1 ping statistics --- 00:27:31.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.335 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1572503 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1572503 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1572503 ']' 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.335 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.336 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.336 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.336 07:36:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:31.336 [2024-11-26 07:36:58.620963] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:27:31.336 [2024-11-26 07:36:58.621030] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.336 [2024-11-26 07:36:58.721369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:31.336 [2024-11-26 07:36:58.773514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:31.336 [2024-11-26 07:36:58.773562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.336 [2024-11-26 07:36:58.773571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.336 [2024-11-26 07:36:58.773579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.336 [2024-11-26 07:36:58.773585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.336 [2024-11-26 07:36:58.775386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.336 [2024-11-26 07:36:58.775708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:31.336 [2024-11-26 07:36:58.775710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.595 07:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.595 07:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:31.595 07:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:31.595 07:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:31.595 07:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:31.595 07:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.596 07:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:31.596 [2024-11-26 07:36:59.655833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.857 07:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:31.857 Malloc0 00:27:31.857 07:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:32.118 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:32.379 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.640 [2024-11-26 07:37:00.495959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.640 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:32.640 [2024-11-26 07:37:00.692520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:32.640 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:32.901 [2024-11-26 07:37:00.885242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:27:32.901 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:32.901 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1572872 00:27:32.901 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:32.901 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1572872 /var/tmp/bdevperf.sock 00:27:32.901 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1572872 ']' 00:27:32.901 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:32.901 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:32.901 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:32.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:32.901 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.901 07:37:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:33.844 07:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.844 07:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:33.844 07:37:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:34.104 NVMe0n1 00:27:34.363 07:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:34.363 00:27:34.364 07:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1573211 00:27:34.364 07:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:34.364 07:37:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:27:35.743 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:35.743 [2024-11-26 07:37:03.603137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d84f0 is same with the state(6) to be set 00:27:35.743 [2024-11-26 07:37:03.603181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d84f0 is same with the state(6) to be set 00:27:35.743 [2024-11-26 07:37:03.603187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d84f0 is same with the state(6) to be set 00:27:35.743 
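At this point the failover exercise proper begins: bdevperf is attached to the subsystem with -x failover and two candidate paths (4420 and 4421), and failover.sh now removes and re-adds listeners under I/O to force path switches. The whole choreography, condensed from the RPCs traced above ($rpc stands in for the full scripts/rpc.py path, and the port loop is shorthand for the three separate add_listener calls):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # bdevperf holds two paths to the same controller, both flagged for failover:
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # Dropping the active listener forces I/O over to the surviving path:
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420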
00:27:35.744 07:37:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:39.036 07:37:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:39.036 00:27:39.036 07:37:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:39.296 [2024-11-26 07:37:07.221449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d9040 is same with the state(6) to be set
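Each of these listener removals sets off a burst of the tcp.c:1773 message; the text itself explains the condition: nvmf_tcp_qpair_set_recv_state is being asked to put a qpair into the recv state it already holds, over and over while the dropped connections drain. Bursts like this read far better as frequency counts than as a raw stream; a minimal sketch, assuming the console output has been saved to a hypothetical build.log:

  # Strip elapsed-time and bracketed timestamps, then count identical messages.
  sed -E 's/^[0-9:.]+ //; s/\[[0-9 :.-]+\] //' build.log | sort | uniq -c | sort -rn | head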
00:27:39.297 07:37:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:42.600 07:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.600 [2024-11-26 07:37:10.413040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.600 07:37:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:43.545 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:43.545 [2024-11-26 07:37:11.603229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239e4c0 is same with the state(6) to be set 00:27:43.545 [2024-11-26 
00:27:43.545 07:37:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1573211
00:27:50.137 {
00:27:50.137   "results": [
00:27:50.137     {
00:27:50.137       "job": "NVMe0n1",
00:27:50.137       "core_mask": "0x1",
00:27:50.137       "workload": "verify",
00:27:50.137       "status": "finished",
00:27:50.137       "verify_range": {
00:27:50.137         "start": 0,
00:27:50.137         "length": 16384
00:27:50.137       },
00:27:50.137       "queue_depth": 128,
00:27:50.137       "io_size": 4096,
00:27:50.137       "runtime": 15.010544,
00:27:50.137       "iops": 12352.8501032341,
00:27:50.137       "mibps": 48.2533207157582,
00:27:50.137       "io_failed": 14141,
00:27:50.137       "io_timeout": 0,
00:27:50.137       "avg_latency_us": 9606.882964328903,
00:27:50.137       "min_latency_us": 542.72,
00:27:50.137       "max_latency_us": 21517.653333333332
00:27:50.137     }
00:27:50.137   ],
00:27:50.137   "core_count": 1
00:27:50.137 }
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1572872
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1572872 ']'
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1572872
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572872
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572872'
00:27:50.137 killing process with pid 1572872
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1572872
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1572872
00:27:50.137 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:50.137 [2024-11-26 07:37:00.964785] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:27:50.137 [2024-11-26 07:37:00.964866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572872 ]
00:27:50.137 [2024-11-26 07:37:01.059218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:50.137 [2024-11-26 07:37:01.111194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:50.137 Running I/O for 15 seconds...
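The results block printed after the `wait` above is bdevperf's verdict for the 15-second verify job, and its figures are internally consistent: iops times io_size reproduces the reported mibps. The snippet below recomputes that figure with the numbers copied from the report; the reconstructed bdevperf command line in the comment is an assumption inferred from those parameters, not taken from this log:

    # The job above corresponds to a bdevperf run along the lines of:
    #   ./build/examples/bdevperf -q 128 -o 4096 -w verify -t 15
    # Recompute throughput from the reported IOPS and I/O size:
    awk 'BEGIN {
        iops = 12352.8501032341   # "iops" from the results block
        sz   = 4096               # "io_size" in bytes
        printf "%.4f MiB/s\n", iops * sz / (1024 * 1024)
    }'
    # Prints 48.2533 MiB/s, matching the reported "mibps".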
00:27:50.137 11364.00 IOPS, 44.39 MiB/s [2024-11-26T06:37:18.235Z]
00:27:50.137 [2024-11-26 07:37:03.604054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:50.137 [2024-11-26 07:37:03.604087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:50.137 (the ASYNC EVENT REQUEST command/ABORTED - SQ DELETION completion pair above repeated for admin commands cid:1, cid:2 and cid:3, 07:37:03.604098 through 07:37:03.604138)
00:27:50.137 [2024-11-26 07:37:03.604146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e3d90 is same with the state(6) to be set
00:27:50.137 [2024-11-26 07:37:03.604198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:50.137 [2024-11-26 07:37:03.604209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:50.137 (equivalent nvme_io_qpair_print_command/spdk_nvme_print_completion pairs repeated, 07:37:03.604222 through 07:37:03.606358, for the remaining queued I/O: READ commands covering lba 98768 through 99480 and WRITE commands covering lba 99488 through 99768, 8 blocks each, every one completed ABORTED - SQ DELETION (00/08))
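What the wall of aborts above amounts to: when the active portal disappears, the host completes every command still queued on qpair 0x22e3d90 with ABORTED - SQ DELETION. The visible ranges (91 READs over lba 98760-99480 and 36 WRITEs over lba 99488-99768, plus the one WRITE at lba 99776 completed manually just below) add up to 128 commands, exactly the job's "queue_depth": 128. A one-liner to pull the same count out of the saved log, with the try.txt path as printed by failover.sh@63 above (a sketch for inspection, not part of the test):

    # Count aborted I/O completions across the qpair teardowns in this run.
    grep -c 'ABORTED - SQ DELETION (00/08) qid:1' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt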
00:27:50.140 [2024-11-26 07:37:03.606377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:50.140 [2024-11-26 07:37:03.606383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:50.140 [2024-11-26 07:37:03.606390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99776 len:8 PRP1 0x0 PRP2 0x0
00:27:50.140 [2024-11-26 07:37:03.606398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:50.140 [2024-11-26 07:37:03.606437] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:50.140 [2024-11-26 07:37:03.606447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:27:50.140 [2024-11-26 07:37:03.610009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:50.140 [2024-11-26 07:37:03.610033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3d90 (9): Bad file descriptor
00:27:50.140 [2024-11-26 07:37:03.771103] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:27:50.140 10368.50 IOPS, 40.50 MiB/s [2024-11-26T06:37:18.238Z]
00:27:50.140 10627.00 IOPS, 41.51 MiB/s [2024-11-26T06:37:18.238Z]
00:27:50.140 11123.00 IOPS, 43.45 MiB/s [2024-11-26T06:37:18.238Z]
00:27:50.140 [2024-11-26 07:37:07.222671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:50.140 [2024-11-26 07:37:07.222701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:50.141 (equivalent nvme_io_qpair_print_command/spdk_nvme_print_completion pairs repeated, 07:37:07.222712 through 07:37:07.223114, for READ commands covering lba 82616 through 82880, every one completed ABORTED - SQ DELETION (00/08))
00:27:50.141 [2024-11-26 07:37:07.223122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:50.141 [2024-11-26 07:37:07.223127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.141 [2024-11-26 07:37:07.223134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.141 [2024-11-26 07:37:07.223139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.141 [2024-11-26 07:37:07.223146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.141 [2024-11-26 07:37:07.223151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.141 [2024-11-26 07:37:07.223161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.141 [2024-11-26 07:37:07.223167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:50.142 [2024-11-26 07:37:07.223257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223375] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.142 [2024-11-26 07:37:07.223498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:27 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.142 [2024-11-26 07:37:07.223627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.142 [2024-11-26 07:37:07.223633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83296 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 
07:37:07.223846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.143 [2024-11-26 07:37:07.223929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.143 [2024-11-26 07:37:07.223953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83440 len:8 PRP1 0x0 PRP2 0x0 00:27:50.143 [2024-11-26 07:37:07.223958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.143 [2024-11-26 07:37:07.223970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.143 [2024-11-26 07:37:07.223974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83448 len:8 PRP1 0x0 PRP2 0x0 00:27:50.143 [2024-11-26 07:37:07.223979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.223985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.143 [2024-11-26 07:37:07.223989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.143 [2024-11-26 07:37:07.223993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83456 len:8 PRP1 0x0 PRP2 0x0 00:27:50.143 [2024-11-26 07:37:07.223999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.224004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.143 [2024-11-26 07:37:07.224008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.143 [2024-11-26 07:37:07.224012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83464 len:8 PRP1 0x0 PRP2 0x0 00:27:50.143 [2024-11-26 07:37:07.224017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.224023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.143 [2024-11-26 07:37:07.224027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.143 [2024-11-26 07:37:07.224031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83472 len:8 PRP1 0x0 PRP2 0x0 00:27:50.143 [2024-11-26 07:37:07.224036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.224042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.143 [2024-11-26 07:37:07.224046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.143 [2024-11-26 07:37:07.224050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83480 len:8 PRP1 0x0 PRP2 0x0 00:27:50.143 [2024-11-26 07:37:07.224055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.224060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.143 [2024-11-26 07:37:07.224064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.143 [2024-11-26 07:37:07.224069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83488 len:8 PRP1 0x0 PRP2 0x0 00:27:50.143 [2024-11-26 07:37:07.224075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.224081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.143 [2024-11-26 07:37:07.224085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.143 [2024-11-26 07:37:07.224089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83496 len:8 PRP1 0x0 PRP2 0x0 00:27:50.143 [2024-11-26 07:37:07.224094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.143 [2024-11-26 07:37:07.224099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.224103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.224107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83504 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.224113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.224118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.224122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.224126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83512 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.224131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.224136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.224140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.224144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83520 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.224149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.224154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.224162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.224167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83528 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.224172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.224177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.224181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.224186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83536 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.224191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.224197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.224200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.224205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83544 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.224210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:50.144 [2024-11-26 07:37:07.224215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.224219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.224225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83552 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.224230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.224235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.224239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.224244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83560 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.224248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.224254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.224258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.224262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83568 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.224267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.224272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.224276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.224280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83576 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.224285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.236392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.236403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83584 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.236413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.236429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.236436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83592 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.236442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236450] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.236455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.236460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83600 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.236467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.236480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.236485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83608 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.236492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.236512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.236518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83616 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.236525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.144 [2024-11-26 07:37:07.236537] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.144 [2024-11-26 07:37:07.236542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83624 len:8 PRP1 0x0 PRP2 0x0 00:27:50.144 [2024-11-26 07:37:07.236550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236589] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:50.144 [2024-11-26 07:37:07.236618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.144 [2024-11-26 07:37:07.236626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.144 [2024-11-26 07:37:07.236642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.144 [2024-11-26 07:37:07.236656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236664] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.144 [2024-11-26 07:37:07.236671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:07.236678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:27:50.144 [2024-11-26 07:37:07.236715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3d90 (9): Bad file descriptor 00:27:50.144 [2024-11-26 07:37:07.240002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:27:50.144 [2024-11-26 07:37:07.311252] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:27:50.144 11231.20 IOPS, 43.87 MiB/s [2024-11-26T06:37:18.242Z] 11522.67 IOPS, 45.01 MiB/s [2024-11-26T06:37:18.242Z] 11730.29 IOPS, 45.82 MiB/s [2024-11-26T06:37:18.242Z] 11919.25 IOPS, 46.56 MiB/s [2024-11-26T06:37:18.242Z] 12051.89 IOPS, 47.08 MiB/s [2024-11-26T06:37:18.242Z] [2024-11-26 07:37:11.603514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.144 [2024-11-26 07:37:11.603543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:11.603555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.144 [2024-11-26 07:37:11.603562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:11.603569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.144 [2024-11-26 07:37:11.603574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:11.603584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.144 [2024-11-26 07:37:11.603589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.144 [2024-11-26 07:37:11.603597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.144 [2024-11-26 07:37:11.603602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603625] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:37056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.145 [2024-11-26 07:37:11.603764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.145 [2024-11-26 07:37:11.603775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.145 [2024-11-26 07:37:11.603787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.145 [2024-11-26 07:37:11.603799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.145 [2024-11-26 07:37:11.603810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.145 [2024-11-26 07:37:11.603822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.145 [2024-11-26 07:37:11.603833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 
[2024-11-26 07:37:11.603862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.145 [2024-11-26 07:37:11.603970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.145 [2024-11-26 07:37:11.603977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:50.145 [2024-11-26 07:37:11.603982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for every remaining queued command on qid:1 -- WRITEs lba:37552 through lba:38072 and READs lba:37112 through lba:37288, each completed ABORTED - SQ DELETION (00/08) ...]
00:27:50.148 [2024-11-26 07:37:11.605034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:50.148 [2024-11-26 07:37:11.605039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:50.148 [2024-11-26 07:37:11.605043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37296 len:8 PRP1 0x0 PRP2 0x0
00:27:50.148 [2024-11-26 07:37:11.605049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:50.148 [2024-11-26 07:37:11.605084] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:27:50.148 [2024-11-26 07:37:11.605102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:50.148 [2024-11-26 07:37:11.605108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin commands cid:1, cid:2 and cid:3 ...]
00:27:50.148 [2024-11-26 07:37:11.605145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:27:50.148 [2024-11-26 07:37:11.607583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:27:50.148 [2024-11-26 07:37:11.607603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3d90 (9): Bad file descriptor
00:27:50.148 [2024-11-26 07:37:11.679048] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
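The wall of ABORTED - SQ DELETION notices above is the expected signature of a path switch rather than a failure: when bdev_nvme fails over from 10.0.0.2:4422 to 10.0.0.2:4420 it tears down the submission queue on the old path, every queued READ/WRITE on qid:1 is completed with that status, and I/O resumes once the controller reset finishes. A minimal sketch of the RPC sequence that provokes this, assuming the same rpc.py commands and transport IDs that appear in the xtrace output further down this log:

    # register one bdev controller with three failover paths to the same subsystem
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # detaching the path that currently carries I/O forces a failover like the one logged above
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1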
00:27:50.148 12056.20 IOPS, 47.09 MiB/s
[2024-11-26T06:37:18.246Z] 12131.73 IOPS, 47.39 MiB/s
[2024-11-26T06:37:18.246Z] 12199.00 IOPS, 47.65 MiB/s
[2024-11-26T06:37:18.246Z] 12263.15 IOPS, 47.90 MiB/s
[2024-11-26T06:37:18.246Z] 12315.71 IOPS, 48.11 MiB/s
[2024-11-26T06:37:18.246Z] 12354.07 IOPS, 48.26 MiB/s
00:27:50.148 Latency(us)
00:27:50.148 [2024-11-26T06:37:18.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:50.148 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:50.148 Verification LBA range: start 0x0 length 0x4000
00:27:50.148 NVMe0n1 : 15.01 12352.85 48.25 942.07 0.00 9606.88 542.72 21517.65
00:27:50.148 [2024-11-26T06:37:18.246Z] ===================================================================================================================
00:27:50.148 [2024-11-26T06:37:18.246Z] Total : 12352.85 48.25 942.07 0.00 9606.88 542.72 21517.65
00:27:50.148 Received shutdown signal, test time was about 15.000000 seconds
00:27:50.148
00:27:50.148 Latency(us)
00:27:50.148 [2024-11-26T06:37:18.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:50.148 [2024-11-26T06:37:18.246Z] ===================================================================================================================
00:27:50.148 [2024-11-26T06:37:18.246Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1576196
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1576196 /var/tmp/bdevperf.sock
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1576196 ']'
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:50.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
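The failover.sh@65-67 lines above are the pass/fail gate for the 15-second phase: the test greps its log for the reset-complete marker and aborts unless exactly three successful failovers were recorded. A condensed sketch of that check, with try.txt standing in for the log file the test writes (the cat at failover.sh@94 below shows the real file):

    # count completed resets in the bdevperf log; exactly three path switches are expected
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count != 3 )) && exit 1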
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:50.148 07:37:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:27:50.719 07:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:50.719 07:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:27:50.719 07:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:50.719 [2024-11-26 07:37:18.779204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:50.719 07:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:50.979 [2024-11-26 07:37:18.963684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:27:50.979 07:37:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:51.239 NVMe0n1
00:27:51.239 07:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:51.809
00:27:51.810 07:37:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:27:52.070
00:27:52.070 07:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:52.070 07:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:27:52.330 07:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:52.590 07:37:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:27:55.886 07:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:55.886 07:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:27:55.886 07:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:55.886 07:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1577240
00:27:55.886 07:37:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1577240
00:27:56.829 {
00:27:56.829   "results": [
00:27:56.829     {
00:27:56.829       "job": "NVMe0n1",
00:27:56.829       "core_mask": "0x1",
00:27:56.829       "workload": "verify",
00:27:56.829       "status": "finished",
00:27:56.829       "verify_range": {
00:27:56.829         "start": 0,
00:27:56.829         "length": 16384
00:27:56.829       },
00:27:56.829       "queue_depth": 128,
00:27:56.829       "io_size": 4096,
00:27:56.829       "runtime": 1.01292,
00:27:56.829       "iops": 12809.501243928444,
00:27:56.829       "mibps": 50.03711423409548,
00:27:56.829       "io_failed": 0,
00:27:56.829       "io_timeout": 0,
00:27:56.829       "avg_latency_us": 9953.938727296083,
00:27:56.829       "min_latency_us": 1979.7333333333333,
00:27:56.829       "max_latency_us": 8956.586666666666
00:27:56.829     }
00:27:56.829   ],
00:27:56.829   "core_count": 1
00:27:56.829 }
00:27:56.829 07:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:56.829 [2024-11-26 07:37:17.828325] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:27:56.829 [2024-11-26 07:37:17.828385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576196 ]
00:27:56.829 [2024-11-26 07:37:17.912775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:56.829 [2024-11-26 07:37:17.944229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:56.829 [2024-11-26 07:37:20.395368] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:27:56.829 [2024-11-26 07:37:20.395427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:56.829 [2024-11-26 07:37:20.395437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:56.829 [2024-11-26 07:37:20.395446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:56.829 [2024-11-26 07:37:20.395451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:56.829 [2024-11-26 07:37:20.395457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:56.829 [2024-11-26 07:37:20.395462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:56.829 [2024-11-26 07:37:20.395468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:56.829 [2024-11-26 07:37:20.395473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:56.829 [2024-11-26 07:37:20.395479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:27:56.829 [2024-11-26 07:37:20.395505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:27:56.829 [2024-11-26 07:37:20.395518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c2d90 (9): Bad file descriptor
00:27:56.829 [2024-11-26 07:37:20.404922] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:27:56.829 Running I/O for 1 seconds...
00:27:56.829 12720.00 IOPS, 49.69 MiB/s
00:27:56.829
00:27:56.829 Latency(us)
00:27:56.829 [2024-11-26T06:37:24.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.829 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:56.829 Verification LBA range: start 0x0 length 0x4000
00:27:56.829 NVMe0n1 : 1.01 12809.50 50.04 0.00 0.00 9953.94 1979.73 8956.59
00:27:56.829 [2024-11-26T06:37:24.928Z] ===================================================================================================================
00:27:56.830 [2024-11-26T06:37:24.928Z] Total : 12809.50 50.04 0.00 0.00 9953.94 1979.73 8956.59
00:27:56.830 07:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:56.830 07:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:27:56.830 07:37:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:57.090 07:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:27:57.090 07:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:27:57.350 07:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:57.611 07:37:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1576196
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1576196 ']'
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1576196
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1576196
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1576196'
00:28:00.913 killing process with pid 1576196
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1576196
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1576196
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:28:00.913 07:37:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:01.199 rmmod nvme_tcp
00:28:01.199 rmmod nvme_fabrics
00:28:01.199 rmmod nvme_keyring
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1572503 ']'
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1572503
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1572503 ']'
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1572503
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572503
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572503'
00:28:01.199 killing process with pid 1572503
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1572503
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1572503
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:01.199 07:37:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:03.811
00:28:03.811 real 0m40.570s
00:28:03.811 user 2m4.675s
00:28:03.811 sys 0m8.829s
00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:03.811 ************************************
00:28:03.811 END TEST nvmf_failover
00:28:03.811 ************************************
00:28:03.811 07:37:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:28:03.811 07:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:03.811 07:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:03.811 07:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:03.811 ************************************
00:28:03.811 START TEST nvmf_host_discovery
00:28:03.811 ************************************
00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
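The discovery run that starts here goes through the same bring-up as every other host test: locate the test storage, gate the lcov coverage flags on the installed lcov version, then source nvmf/common.sh. The version gate traced in the lines that follow ('lt 1.15 2' via cmp_versions/decimal) is an IFS-split, field-by-field numeric compare; a hypothetical standalone condensation of the traced logic:

    # returns success when version $1 sorts before $2, so 'lt 1.15 2' succeeds
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }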
00:28:03.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:03.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.811 --rc genhtml_branch_coverage=1 00:28:03.811 --rc genhtml_function_coverage=1 00:28:03.811 --rc genhtml_legend=1 00:28:03.811 --rc geninfo_all_blocks=1 00:28:03.811 --rc geninfo_unexecuted_blocks=1 00:28:03.811 00:28:03.811 ' 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:03.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.811 --rc genhtml_branch_coverage=1 00:28:03.811 --rc genhtml_function_coverage=1 00:28:03.811 --rc genhtml_legend=1 00:28:03.811 --rc geninfo_all_blocks=1 00:28:03.811 --rc geninfo_unexecuted_blocks=1 00:28:03.811 00:28:03.811 ' 00:28:03.811 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:03.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.812 --rc genhtml_branch_coverage=1 00:28:03.812 --rc genhtml_function_coverage=1 00:28:03.812 --rc genhtml_legend=1 00:28:03.812 --rc geninfo_all_blocks=1 00:28:03.812 --rc geninfo_unexecuted_blocks=1 00:28:03.812 00:28:03.812 ' 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:03.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.812 --rc genhtml_branch_coverage=1 00:28:03.812 --rc genhtml_function_coverage=1 00:28:03.812 --rc genhtml_legend=1 00:28:03.812 --rc geninfo_all_blocks=1 00:28:03.812 --rc geninfo_unexecuted_blocks=1 00:28:03.812 00:28:03.812 ' 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:03.812 07:37:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
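build_nvmf_app_args trips a bash complaint here: "[: : integer expression expected" means an unset variable reached a numeric -eq test at nvmf/common.sh line 33. A defaulted expansion avoids it; the exact variable at that line is an assumption in this sketch:

```bash
# Guard the numeric test with :-0 so an unset flag compares as 0 instead of
# producing "[: : integer expression expected".
if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then   # variable name assumed
  NVMF_APP=(sudo -E -u "$SUDO_USER" "${NVMF_APP[@]}")
fi
```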
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.812 07:37:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
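nvmftestinit begins by tearing down leftovers: _remove_spdk_ns, run with xtrace suppressed above, deletes any namespaces a previous run left behind. Roughly what it boils down to (a sketch, not the helper verbatim):

```bash
# Delete stale SPDK-owned network namespaces before rebuilding the topology.
while read -r ns _; do
  [[ $ns == *_ns_spdk ]] && ip netns delete "$ns"
done < <(ip netns list)
```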
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:11.953 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.953 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:11.954 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.954 07:37:38 
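The e810/x722/mlx arrays above are populated from a PCI bus cache keyed by vendor:device ID; the two "Found 0000:4b:00.0/1" hits are E810 NICs (0x8086:0x159b) bound to the ice driver. The same enumeration can be reproduced by hand with lspci (illustrative, not part of the script):

```bash
# List Intel E810 devices by the same vendor:device pairs the cache uses.
for dev in 1592 159b; do
  lspci -D -d "8086:${dev}"   # -D prints the full domain:bus:dev.func address
done
```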
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:11.954 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:11.954 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.954 
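For each matching PCI function, the script resolves the kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names reported above come from. The lookup in isolation (example BDF taken from the log):

```bash
pci=0000:4b:00.0
for path in "/sys/bus/pci/devices/$pci/net/"*; do
  [[ -e $path ]] && echo "${path##*/}"   # -> cvl_0_0
done
```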
07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.954 07:37:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:11.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:28:11.954 00:28:11.954 --- 10.0.0.2 ping statistics --- 00:28:11.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.954 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
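nvmf_tcp_init then builds the two-endpoint topology the pings verify: the target NIC moves into its own namespace with 10.0.0.2, the initiator NIC keeps 10.0.0.1, and an iptables rule admits port 4420. Condensed from the trace (names and addresses as logged; the real helper also tags the rule with an SPDK_NVMF comment):

```bash
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
```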
00:28:11.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:28:11.954 00:28:11.954 --- 10.0.0.1 ping statistics --- 00:28:11.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.954 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1582587 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1582587 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1582587 ']' 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.954 [2024-11-26 07:37:39.268650] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
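nvmfappstart launches the target inside the namespace and waitforlisten blocks until its RPC socket answers. A sketch of that sequence; the polling loop is an assumed equivalent of the helper, not its actual body:

```bash
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the default RPC socket until the app is up (what waitforlisten waits for).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
```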
00:28:11.954 [2024-11-26 07:37:39.268721] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.954 [2024-11-26 07:37:39.341321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.954 [2024-11-26 07:37:39.386976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.954 [2024-11-26 07:37:39.387020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.954 [2024-11-26 07:37:39.387027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.954 [2024-11-26 07:37:39.387032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.954 [2024-11-26 07:37:39.387037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.954 [2024-11-26 07:37:39.387695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.954 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.955 [2024-11-26 07:37:39.550899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.955 [2024-11-26 07:37:39.563192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.955 null0 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
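With the target up, the test creates the TCP transport, the well-known discovery listener on port 8009, and the first null bdev. The rpc_cmd calls traced above map directly onto scripts/rpc.py:

```bash
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
./scripts/rpc.py bdev_null_create null0 1000 512   # name, size (MiB), block size
```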
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.955 null1 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1582611 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1582611 /tmp/host.sock 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1582611 ']' 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:11.955 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.955 07:37:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.955 [2024-11-26 07:37:39.661870] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
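A second SPDK app now plays the host role, listening for RPCs on /tmp/host.sock rather than the default socket so the two instances do not collide (binary path and flags as logged):

```bash
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!
```

The bdev_wait_for_examine call above simply lets null0/null1 registration settle before the host starts probing.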
00:28:11.955 [2024-11-26 07:37:39.661934] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582611 ] 00:28:11.955 [2024-11-26 07:37:39.752752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.955 [2024-11-26 07:37:39.805463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.529 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.529 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
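Once both apps answer, the host side enables bdev_nvme debug logging and kicks off discovery against the 8009 listener, exactly as traced at host/discovery.sh@50-51:

```bash
./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
```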
-- # jq -r '.[].name' 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:12.530 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
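The repeated rpc_cmd | jq | sort | xargs pipelines above are the test's two state probes; expressed as standalone functions they reduce to:

```bash
get_subsystem_names() {   # attached controllers, e.g. "nvme0"
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
    | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {         # exposed namespaces, e.g. "nvme0n1 nvme0n2"
  ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
    | jq -r '.[].name' | sort | xargs
}
```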
-s /tmp/host.sock bdev_nvme_get_controllers 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.791 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.792 [2024-11-26 07:37:40.834405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.792 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.792 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:12.792 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:12.792 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:12.792 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.792 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:12.792 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:12.792 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:12.792 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
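On the target side the real (non-discovery) subsystem is assembled next: cnode0 gets the null0 namespace and a data listener on port 4420. In rpc.py form, with the arguments as logged:

```bash
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
```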
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:13.053 07:37:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:13.053 07:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:13.053 07:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:13.053 07:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:13.053 07:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.053 07:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:13.053 07:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:13.053 07:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:13.053 07:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.053 07:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:28:13.053 07:37:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:13.626 [2024-11-26 07:37:41.550149] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:13.626 [2024-11-26 07:37:41.550184] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:13.626 [2024-11-26 07:37:41.550200] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:13.626 
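waitforcondition, whose expansion dominates the trace, is a bounded poll: evaluate the condition string up to ten times, a second apart. A minimal reimplementation consistent with the max=10 / eval / sleep 1 steps shown above (the real helper lives in common/autotest_common.sh):

```bash
waitforcondition() {
  local cond=$1 max=10
  while ((max--)); do
    eval "$cond" && return 0
    sleep 1
  done
  return 1   # condition never held within the window
}
waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
```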
[2024-11-26 07:37:41.637460] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:13.888 [2024-11-26 07:37:41.740413] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:13.888 [2024-11-26 07:37:41.741514] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa967a0:1 started. 00:28:13.888 [2024-11-26 07:37:41.743462] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:13.888 [2024-11-26 07:37:41.743491] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:13.888 [2024-11-26 07:37:41.748223] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa967a0 was disconnected and freed. delete nvme_qpair. 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.151 07:37:42 
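The INFO lines above are the discovery state machine at work: attach to the 8009 discovery controller, fetch the log page, find cnode0 at 4420, create controller nvme0, then drop the bootstrap qpair. The same log page can be cross-checked from outside with nvme-cli (illustrative only; the test itself inspects state over the RPC socket):

```bash
nvme discover -t tcp -a 10.0.0.2 -s 8009 -q nqn.2021-12.io.spdk:test
```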
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
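get_subsystem_paths, traced at host/discovery.sh@63, reports which trsvcids controller nvme0 is connected through; at this point that should be just the 4420 data port:

```bash
get_subsystem_paths() {
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
get_subsystem_paths nvme0   # -> 4420
```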
common/autotest_common.sh@921 -- # get_notification_count 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.151 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:14.414 [2024-11-26 07:37:42.284136] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa96cd0:1 started. 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.414 [2024-11-26 07:37:42.330973] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa96cd0 was disconnected and freed. delete nvme_qpair. 
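get_notification_count asks the host app how many bdev notifications arrived after the last seen id, then advances notify_id; the 0 -> 1 -> 2 progression above, one event per null bdev attach, follows from that. A sketch matching the traced behavior (the real helper derives the new id from the last notification; adding the count yields the same values here):

```bash
get_notification_count() {
  notification_count=$(./scripts/rpc.py -s /tmp/host.sock \
      notify_get_notifications -i "$notify_id" | jq '. | length')
  notify_id=$((notify_id + notification_count))   # uses/updates the globals
}
```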
00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.414 [2024-11-26 07:37:42.390284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:14.414 [2024-11-26 07:37:42.390613] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:14.414 [2024-11-26 07:37:42.390639] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:14.414 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:14.415 [2024-11-26 07:37:42.478893] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@922 -- # return 0 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:14.415 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:14.677 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.677 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:14.677 07:37:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:14.677 [2024-11-26 07:37:42.585133] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:28:14.677 [2024-11-26 07:37:42.585196] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:14.677 [2024-11-26 07:37:42.585208] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:14.677 [2024-11-26 07:37:42.585213] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 
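The get_subsystem_paths evaluations on either side of this point are, per the host/discovery.sh@63 frames, one short RPC-plus-jq pipeline; a sketch reconstructed from the traced commands (rpc_cmd here is the harness's JSON-RPC client wrapper):

    get_subsystem_paths() {
        # @63: list the controller's paths and flatten the service IDs for comparison
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' \
            | sort -n \
            | xargs
    }

The first evaluation flattened to just 4420, so [[ 4420 == 4420 4421 ]] failed and the loop slept; the controller to 10.0.0.2:4421 was created in the meantime (07:37:42.585), and the retry completing just below sees both ports.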
00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.618 [2024-11-26 07:37:43.661883] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:15.618 [2024-11-26 07:37:43.661906] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:15.618 [2024-11-26 07:37:43.665134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.618 [2024-11-26 07:37:43.665153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.618 [2024-11-26 07:37:43.665166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.618 [2024-11-26 07:37:43.665174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.618 [2024-11-26 07:37:43.665182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.618 [2024-11-26 07:37:43.665189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.618 [2024-11-26 07:37:43.665197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.618 [2024-11-26 07:37:43.665205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.618 [2024-11-26 07:37:43.665212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e10 is same with the state(6) to be set 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:15.618 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:15.619 [2024-11-26 07:37:43.675147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e10 (9): Bad file descriptor 00:28:15.619 [2024-11-26 07:37:43.685182] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:15.619 [2024-11-26 07:37:43.685196] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:15.619 [2024-11-26 07:37:43.685201] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:15.619 [2024-11-26 07:37:43.685206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:15.619 [2024-11-26 07:37:43.685224] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:15.619 [2024-11-26 07:37:43.685540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.619 [2024-11-26 07:37:43.685555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa66e10 with addr=10.0.0.2, port=4420 00:28:15.619 [2024-11-26 07:37:43.685564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e10 is same with the state(6) to be set 00:28:15.619 [2024-11-26 07:37:43.685575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e10 (9): Bad file descriptor 00:28:15.619 [2024-11-26 07:37:43.685586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:15.619 [2024-11-26 07:37:43.685593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:15.619 [2024-11-26 07:37:43.685601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:15.619 [2024-11-26 07:37:43.685608] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:15.619 [2024-11-26 07:37:43.685613] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:15.619 [2024-11-26 07:37:43.685618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
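Stepping back from the qpair teardown for a moment: the host/discovery.sh@74-@75 bookkeeping that ran just before the listener was removed asks only for notifications newer than the last seen offset, then advances that offset by however many came back (-i 0 returned 1 event, -i 1 returned 1, -i 2 returned 0 with notify_id staying at 2). A reconstruction consistent with those traced values, not a verbatim copy of the script:

    get_notification_count() {
        # @74: fetch only notifications past the current offset and count them
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        # @75: advance the offset so already-counted events are not seen twice
        notify_id=$((notify_id + notification_count))
    }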
00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.619 [2024-11-26 07:37:43.695255] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:15.619 [2024-11-26 07:37:43.695266] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:15.619 [2024-11-26 07:37:43.695271] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:15.619 [2024-11-26 07:37:43.695276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:15.619 [2024-11-26 07:37:43.695290] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:15.619 [2024-11-26 07:37:43.695575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.619 [2024-11-26 07:37:43.695587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa66e10 with addr=10.0.0.2, port=4420 00:28:15.619 [2024-11-26 07:37:43.695594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e10 is same with the state(6) to be set 00:28:15.619 [2024-11-26 07:37:43.695606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e10 (9): Bad file descriptor 00:28:15.619 [2024-11-26 07:37:43.695616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:15.619 [2024-11-26 07:37:43.695623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:15.619 [2024-11-26 07:37:43.695630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:15.619 [2024-11-26 07:37:43.695636] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:15.619 [2024-11-26 07:37:43.695646] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:15.619 [2024-11-26 07:37:43.695651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:15.619 [2024-11-26 07:37:43.705321] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:28:15.619 [2024-11-26 07:37:43.705334] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:15.619 [2024-11-26 07:37:43.705339] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:15.619 [2024-11-26 07:37:43.705344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:15.619 [2024-11-26 07:37:43.705358] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:15.619 [2024-11-26 07:37:43.705548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.619 [2024-11-26 07:37:43.705560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa66e10 with addr=10.0.0.2, port=4420 00:28:15.619 [2024-11-26 07:37:43.705567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e10 is same with the state(6) to be set 00:28:15.619 [2024-11-26 07:37:43.705578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e10 (9): Bad file descriptor 00:28:15.619 [2024-11-26 07:37:43.705589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:15.619 [2024-11-26 07:37:43.705595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:15.619 [2024-11-26 07:37:43.705602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:15.619 [2024-11-26 07:37:43.705611] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:15.619 [2024-11-26 07:37:43.705615] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:15.619 [2024-11-26 07:37:43.705620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:15.619 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.882 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:15.882 [2024-11-26 07:37:43.715390] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:15.882 [2024-11-26 07:37:43.715409] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:15.882 [2024-11-26 07:37:43.715414] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
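The get_bdev_list evaluation threaded through the teardown messages above is the same pattern over bdev_get_bdevs; per the @55 frames it amounts to:

    get_bdev_list() {
        # @55: all bdev names, sorted and flattened to one line ("nvme0n1 nvme0n2")
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

The xargs flattening is what makes the single-string comparisons against "nvme0n1 nvme0n2" possible.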
00:28:15.882 [2024-11-26 07:37:43.715419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:15.882 [2024-11-26 07:37:43.715435] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:15.882 [2024-11-26 07:37:43.715716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.882 [2024-11-26 07:37:43.715729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa66e10 with addr=10.0.0.2, port=4420 00:28:15.882 [2024-11-26 07:37:43.715737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e10 is same with the state(6) to be set 00:28:15.882 [2024-11-26 07:37:43.715748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e10 (9): Bad file descriptor 00:28:15.882 [2024-11-26 07:37:43.715759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:15.882 [2024-11-26 07:37:43.715765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:15.882 [2024-11-26 07:37:43.715773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:15.882 [2024-11-26 07:37:43.715779] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:15.882 [2024-11-26 07:37:43.715784] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:15.882 [2024-11-26 07:37:43.715789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:15.882 [2024-11-26 07:37:43.725467] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:15.882 [2024-11-26 07:37:43.725478] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:15.882 [2024-11-26 07:37:43.725483] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:15.882 [2024-11-26 07:37:43.725487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:15.882 [2024-11-26 07:37:43.725501] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:15.882 [2024-11-26 07:37:43.725782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-11-26 07:37:43.725794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa66e10 with addr=10.0.0.2, port=4420 00:28:15.883 [2024-11-26 07:37:43.725801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e10 is same with the state(6) to be set 00:28:15.883 [2024-11-26 07:37:43.725812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e10 (9): Bad file descriptor 00:28:15.883 [2024-11-26 07:37:43.725822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:15.883 [2024-11-26 07:37:43.725829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:15.883 [2024-11-26 07:37:43.725836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:15.883 [2024-11-26 07:37:43.725842] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:15.883 [2024-11-26 07:37:43.725847] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:15.883 [2024-11-26 07:37:43.725852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:15.883 [2024-11-26 07:37:43.735532] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:15.883 [2024-11-26 07:37:43.735543] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:15.883 [2024-11-26 07:37:43.735548] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:15.883 [2024-11-26 07:37:43.735552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:15.883 [2024-11-26 07:37:43.735565] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:15.883 [2024-11-26 07:37:43.735848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-11-26 07:37:43.735859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa66e10 with addr=10.0.0.2, port=4420 00:28:15.883 [2024-11-26 07:37:43.735866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e10 is same with the state(6) to be set 00:28:15.883 [2024-11-26 07:37:43.735877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e10 (9): Bad file descriptor 00:28:15.883 [2024-11-26 07:37:43.735888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:15.883 [2024-11-26 07:37:43.735894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:15.883 [2024-11-26 07:37:43.735901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:15.883 [2024-11-26 07:37:43.735908] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:28:15.883 [2024-11-26 07:37:43.735912] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:15.883 [2024-11-26 07:37:43.735917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:15.883 [2024-11-26 07:37:43.745595] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:15.883 [2024-11-26 07:37:43.745606] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:15.883 [2024-11-26 07:37:43.745611] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:15.883 [2024-11-26 07:37:43.745616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:15.883 [2024-11-26 07:37:43.745629] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:15.883 [2024-11-26 07:37:43.745906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-11-26 07:37:43.745917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa66e10 with addr=10.0.0.2, port=4420 00:28:15.883 [2024-11-26 07:37:43.745924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa66e10 is same with the state(6) to be set 00:28:15.883 [2024-11-26 07:37:43.745935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66e10 (9): Bad file descriptor 00:28:15.883 [2024-11-26 07:37:43.745945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:15.883 [2024-11-26 07:37:43.745952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:15.883 [2024-11-26 07:37:43.745959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:15.883 [2024-11-26 07:37:43.745965] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:15.883 [2024-11-26 07:37:43.745969] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:15.883 [2024-11-26 07:37:43.745977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
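The Delete-qpairs / connect-failed / reinitialization-failed cycle repeating above (at 43.685, .695, .705, .715, .725, .735, .745, roughly every 10 ms) is the expected fallout of removing the 4420 listener while the host still holds a path to it: each reconnect to 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED) until the discovery service prunes the stale path just below. An illustrative loop, using only RPCs that appear in this trace and not itself part of the test script, that would observe the same resolution from the host socket:

    # Illustrative only: wait until no controller path to port 4420 remains
    while rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
            | jq -e '.[].ctrlrs[].trid | select(.trsvcid == "4420")' > /dev/null; do
        sleep 1
    done
    # jq -e exits non-zero once select() produces no output, ending the loop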
00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.883 [2024-11-26 07:37:43.750181] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:15.883 [2024-11-26 07:37:43.750198] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:15.883 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.884 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.147 07:37:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.147 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:16.147 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:16.147 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:16.147 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:16.147 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:16.147 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.147 07:37:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.089 [2024-11-26 07:37:45.070082] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:17.089 [2024-11-26 07:37:45.070095] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:17.089 [2024-11-26 07:37:45.070104] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:17.089 [2024-11-26 07:37:45.159364] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:17.350 [2024-11-26 07:37:45.221018] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:28:17.350 [2024-11-26 07:37:45.221683] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xa78050:1 started. 00:28:17.350 [2024-11-26 07:37:45.222986] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:17.350 [2024-11-26 07:37:45.223007] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:17.350 [2024-11-26 07:37:45.226545] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xa78050 was disconnected and freed. delete nvme_qpair. 
00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.350 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.350 request: 00:28:17.350 { 00:28:17.350 "name": "nvme", 00:28:17.351 "trtype": "tcp", 00:28:17.351 "traddr": "10.0.0.2", 00:28:17.351 "adrfam": "ipv4", 00:28:17.351 "trsvcid": "8009", 00:28:17.351 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:17.351 "wait_for_attach": true, 00:28:17.351 "method": "bdev_nvme_start_discovery", 00:28:17.351 "req_id": 1 00:28:17.351 } 00:28:17.351 Got JSON-RPC error response 00:28:17.351 response: 00:28:17.351 { 00:28:17.351 "code": -17, 00:28:17.351 "message": "File exists" 00:28:17.351 } 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.351 request: 00:28:17.351 { 00:28:17.351 "name": "nvme_second", 00:28:17.351 "trtype": "tcp", 00:28:17.351 "traddr": "10.0.0.2", 00:28:17.351 "adrfam": "ipv4", 00:28:17.351 "trsvcid": "8009", 00:28:17.351 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:17.351 "wait_for_attach": true, 00:28:17.351 "method": "bdev_nvme_start_discovery", 00:28:17.351 "req_id": 1 00:28:17.351 } 00:28:17.351 Got JSON-RPC error response 00:28:17.351 response: 00:28:17.351 { 00:28:17.351 "code": -17, 00:28:17.351 "message": "File exists" 00:28:17.351 } 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
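Both expected-failure cases above run through the NOT wrapper (@652-@679), which inverts the wrapped command's result: it records the exit status of the rpc_cmd at @655 (1 here, from the -17 "File exists" response), checks for signal-range exits at @663, and succeeds only when the command failed. A minimal sketch consistent with the traced frames, with the >128 signal handling simplified to an assumption:

    NOT() {
        local es=0
        "$@" || es=$?           # @655: run the wrapped command, capture its exit status
        if (( es > 128 )); then # @663: exits above 128 indicate a signal
            return "$es"        # assumption: the real helper inspects these further
        fi
        (( !es == 0 ))          # @679: success iff the wrapped command failed
    }

This is why a duplicate bdev_nvme_start_discovery returning "File exists" counts as a pass here: the test asserts the failure.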
00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.351 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.611 07:37:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.553 [2024-11-26 07:37:46.478350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:18.553 [2024-11-26 07:37:46.478373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa97890 with addr=10.0.0.2, port=8010 00:28:18.553 [2024-11-26 07:37:46.478382] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:18.553 [2024-11-26 07:37:46.478388] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:18.553 [2024-11-26 07:37:46.478393] 
bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:19.497 [2024-11-26 07:37:47.480755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.497 [2024-11-26 07:37:47.480773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa97890 with addr=10.0.0.2, port=8010 00:28:19.497 [2024-11-26 07:37:47.480781] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:19.497 [2024-11-26 07:37:47.480786] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:19.498 [2024-11-26 07:37:47.480790] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:20.441 [2024-11-26 07:37:48.482757] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:20.441 request: 00:28:20.441 { 00:28:20.441 "name": "nvme_second", 00:28:20.441 "trtype": "tcp", 00:28:20.441 "traddr": "10.0.0.2", 00:28:20.441 "adrfam": "ipv4", 00:28:20.441 "trsvcid": "8010", 00:28:20.441 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:20.441 "wait_for_attach": false, 00:28:20.441 "attach_timeout_ms": 3000, 00:28:20.441 "method": "bdev_nvme_start_discovery", 00:28:20.441 "req_id": 1 00:28:20.441 } 00:28:20.441 Got JSON-RPC error response 00:28:20.441 response: 00:28:20.441 { 00:28:20.441 "code": -110, 00:28:20.441 "message": "Connection timed out" 00:28:20.441 } 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:20.441 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1582611 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:20.702 rmmod nvme_tcp 00:28:20.702 rmmod nvme_fabrics 00:28:20.702 rmmod nvme_keyring 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1582587 ']' 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1582587 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1582587 ']' 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1582587 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1582587 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1582587' 00:28:20.702 killing process with pid 1582587 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1582587 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1582587 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.702 07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.702 
07:37:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.246 07:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:23.246 00:28:23.246 real 0m19.431s 00:28:23.246 user 0m22.239s 00:28:23.246 sys 0m7.194s 00:28:23.246 07:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:23.246 07:37:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.246 ************************************ 00:28:23.246 END TEST nvmf_host_discovery 00:28:23.246 ************************************ 00:28:23.246 07:37:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:23.246 07:37:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:23.246 07:37:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.246 07:37:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.246 ************************************ 00:28:23.246 START TEST nvmf_host_multipath_status 00:28:23.246 ************************************ 00:28:23.246 07:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:23.246 * Looking for test storage... 00:28:23.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:28:23.246 07:37:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:23.246 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:23.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.246 --rc genhtml_branch_coverage=1 00:28:23.246 --rc genhtml_function_coverage=1 00:28:23.246 --rc genhtml_legend=1 00:28:23.246 --rc geninfo_all_blocks=1 00:28:23.246 --rc geninfo_unexecuted_blocks=1 00:28:23.247 00:28:23.247 ' 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:23.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.247 --rc genhtml_branch_coverage=1 00:28:23.247 --rc genhtml_function_coverage=1 00:28:23.247 --rc genhtml_legend=1 00:28:23.247 --rc geninfo_all_blocks=1 00:28:23.247 --rc geninfo_unexecuted_blocks=1 00:28:23.247 00:28:23.247 ' 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:23.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.247 --rc genhtml_branch_coverage=1 00:28:23.247 --rc genhtml_function_coverage=1 00:28:23.247 --rc genhtml_legend=1 00:28:23.247 --rc geninfo_all_blocks=1 00:28:23.247 --rc geninfo_unexecuted_blocks=1 00:28:23.247 00:28:23.247 ' 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:23.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:23.247 --rc genhtml_branch_coverage=1 00:28:23.247 --rc genhtml_function_coverage=1 00:28:23.247 --rc 
genhtml_legend=1 00:28:23.247 --rc geninfo_all_blocks=1 00:28:23.247 --rc geninfo_unexecuted_blocks=1 00:28:23.247 00:28:23.247 ' 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:28:23.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:28:23.247 07:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.388 07:37:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:31.388 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.389 
07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:31.389 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:31.389 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:31.389 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:31.389 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:31.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:28:31.389 00:28:31.389 --- 10.0.0.2 ping statistics --- 00:28:31.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.389 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:31.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:28:31.389 00:28:31.389 --- 10.0.0.1 ping statistics --- 00:28:31.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.389 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1588659 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1588659 
00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1588659 ']' 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.389 07:37:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:31.389 [2024-11-26 07:37:58.787218] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:28:31.389 [2024-11-26 07:37:58.787291] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.389 [2024-11-26 07:37:58.884367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:31.389 [2024-11-26 07:37:58.936152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.389 [2024-11-26 07:37:58.936212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.389 [2024-11-26 07:37:58.936221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.389 [2024-11-26 07:37:58.936228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.389 [2024-11-26 07:37:58.936234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:31.389 [2024-11-26 07:37:58.938004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.389 [2024-11-26 07:37:58.938010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.650 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.650 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:31.650 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:31.650 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:31.650 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:31.650 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.651 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1588659 00:28:31.651 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:31.911 [2024-11-26 07:37:59.809579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.911 07:37:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:32.171 Malloc0 00:28:32.171 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:32.432 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:32.432 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.693 [2024-11-26 07:38:00.645666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.693 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:32.953 [2024-11-26 07:38:00.850173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:32.953 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:32.953 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1589149 00:28:32.953 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:32.953 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1589149 
/var/tmp/bdevperf.sock 00:28:32.953 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1589149 ']' 00:28:32.953 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:32.953 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.953 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:32.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:32.953 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.953 07:38:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:33.896 07:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.896 07:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:33.896 07:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:33.896 07:38:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:34.467 Nvme0n1 00:28:34.467 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:34.728 Nvme0n1 00:28:34.728 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:34.728 07:38:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:37.272 07:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:37.272 07:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:37.272 07:38:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:37.272 07:38:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:38.213 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:38.213 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:38.213 07:38:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.213 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:38.475 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.475 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:38.475 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.475 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:38.475 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:38.475 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:38.475 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:38.475 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.736 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.736 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:38.736 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.736 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:38.997 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.997 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:38.997 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.997 07:38:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:38.997 07:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.997 07:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:38.997 07:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.997 07:38:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:39.257 07:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.257 07:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:39.257 07:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:39.584 07:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:39.584 07:38:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:40.539 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:40.539 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:40.539 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.539 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:40.801 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:40.801 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:40.801 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.801 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:41.061 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.061 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:41.061 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:41.061 07:38:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.061 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.061 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:41.061 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.321 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:41.321 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.321 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:41.321 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.321 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:41.581 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.581 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:41.581 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.581 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:41.841 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.841 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:41.841 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:41.841 07:38:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:42.100 07:38:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:43.038 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:43.038 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:43.038 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.038 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:43.298 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:43.298 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:43.298 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.298 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:43.558 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:43.558 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:43.558 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.558 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:43.558 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:43.558 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:43.558 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.558 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:43.817 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:43.817 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:43.817 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:43.817 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:44.077 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.077 07:38:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:44.077 07:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.077 07:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:44.336 07:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.336 07:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:44.336 07:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:28:44.336 07:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:44.596 07:38:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:45.540 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:45.540 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:45.540 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.540 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:45.800 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.800 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:45.800 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.800 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:46.061 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:46.061 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:46.061 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.061 07:38:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:46.061 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.061 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:46.061 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.061 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:46.321 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.321 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:46.321 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:46.321 
07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.581 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:46.581 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:46.581 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:46.581 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:46.842 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:46.842 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:46.842 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:46.842 07:38:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:47.102 07:38:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:48.040 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:48.040 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:48.040 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.040 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:48.300 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:48.300 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:48.300 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.300 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:48.300 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:48.300 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:48.300 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.300 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:48.560 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.560 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:48.560 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.560 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:48.821 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.821 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:48.821 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.821 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:49.081 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:49.081 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:49.081 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:49.081 07:38:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:49.081 07:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:49.081 07:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:49.081 07:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:49.340 07:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:49.601 07:38:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:50.541 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:50.541 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:50.541 07:38:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.541 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:50.802 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:50.802 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:50.802 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.802 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:50.802 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:50.802 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:50.802 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.802 07:38:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:51.063 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.063 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:51.063 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.063 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:51.323 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.323 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:51.323 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.323 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:51.323 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:51.323 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:51.323 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:51.323 07:38:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.583 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.583 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:51.844 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:28:51.845 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:52.106 07:38:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:52.106 07:38:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.493 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:53.754 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:53.754 07:38:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:53.754 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.754 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:54.014 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.014 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:54.014 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.014 07:38:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:54.014 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.014 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:54.275 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.276 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:54.276 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.276 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:54.276 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:54.536 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:54.797 07:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:28:55.740 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:28:55.740 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:55.740 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:55.740 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:55.740 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:56.001 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:56.001 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.001 07:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:56.001 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.001 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:56.001 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.001 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:56.261 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.261 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:56.261 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.261 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:56.521 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.521 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:56.521 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.521 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:56.522 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.522 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:56.522 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:56.522 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:56.782 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:56.782 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:28:56.782 
07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:57.042 07:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:57.043 07:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.426 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:58.687 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:58.687 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:58.687 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.687 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:58.949 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:58.949 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:58.949 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.949 07:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:59.210 07:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:59.210 07:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:59.210 07:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:59.210 07:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:59.210 07:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:59.210 07:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:28:59.210 07:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:59.472 07:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:59.732 07:38:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:29:00.672 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:29:00.672 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:00.672 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.672 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:00.933 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:00.933 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:00.933 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.933 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:00.933 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:29:00.933 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:00.933 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.933 07:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:01.195 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:01.195 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:01.195 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.195 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:01.457 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:01.457 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:01.457 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.457 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:01.457 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:01.457 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:01.457 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.457 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1589149 00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1589149 ']' 00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1589149 00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1589149 00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2
00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1589149'
killing process with pid 1589149
00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1589149
00:29:01.717 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1589149
00:29:01.717 {
00:29:01.717   "results": [
00:29:01.717     {
00:29:01.717       "job": "Nvme0n1",
00:29:01.717       "core_mask": "0x4",
00:29:01.717       "workload": "verify",
00:29:01.717       "status": "terminated",
00:29:01.717       "verify_range": {
00:29:01.717         "start": 0,
00:29:01.718         "length": 16384
00:29:01.718       },
00:29:01.718       "queue_depth": 128,
00:29:01.718       "io_size": 4096,
00:29:01.718       "runtime": 26.897572,
00:29:01.718       "iops": 11929.70131281738,
00:29:01.718       "mibps": 46.60039575319289,
00:29:01.718       "io_failed": 0,
00:29:01.718       "io_timeout": 0,
00:29:01.718       "avg_latency_us": 10709.702071303915,
00:29:01.718       "min_latency_us": 221.86666666666667,
00:29:01.718       "max_latency_us": 3075822.933333333
00:29:01.718     }
00:29:01.718   ],
00:29:01.718   "core_count": 1
00:29:01.718 }
00:29:02.011 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1589149
00:29:02.011 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:02.011 [2024-11-26 07:38:00.932849] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:29:02.011 [2024-11-26 07:38:00.932933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589149 ]
00:29:02.011 [2024-11-26 07:38:01.027826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:02.011 [2024-11-26 07:38:01.078065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:02.011 Running I/O for 90 seconds...
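Every cycle in the trace above has the same shape: set_ANA_state <state for 4420> <state for 4421>, a one-second sleep so the initiator can observe the ANA change, then check_status, which asserts the current/connected/accessible flags of both paths. The xtrace only shows the expanded commands, so the following is a reconstruction of the two helpers in host/multipath_status.sh, not the verbatim source; the rpc variable and local names are guesses:

    # Reconstructed from the xtrace (multipath_status.sh@59-64); names are assumptions.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    set_ANA_state() {    # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {      # $1 = port, $2 = io_path field, $3 = expected value
        local status
        status=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

check_status (@68-@73) then chains six port_status calls in the order its arguments appear in the log: 4420 current, 4421 current, 4420 connected, 4421 connected, 4420 accessible, 4421 accessible. Note the one case where current is true on both ports at once: after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active with both listeners optimized, every optimized path carries I/O under the active/active policy.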
00:29:02.011 10549.00 IOPS, 41.21 MiB/s [2024-11-26T06:38:30.109Z] 10837.50 IOPS, 42.33 MiB/s [2024-11-26T06:38:30.109Z] 10956.00 IOPS, 42.80 MiB/s [2024-11-26T06:38:30.109Z] 11381.25 IOPS, 44.46 MiB/s [2024-11-26T06:38:30.109Z] 11706.00 IOPS, 45.73 MiB/s [2024-11-26T06:38:30.109Z] 11952.67 IOPS, 46.69 MiB/s [2024-11-26T06:38:30.109Z] 12082.43 IOPS, 47.20 MiB/s [2024-11-26T06:38:30.109Z] 12172.25 IOPS, 47.55 MiB/s [2024-11-26T06:38:30.109Z] 12243.11 IOPS, 47.82 MiB/s [2024-11-26T06:38:30.109Z] 12324.00 IOPS, 48.14 MiB/s [2024-11-26T06:38:30.109Z] 12392.18 IOPS, 48.41 MiB/s [2024-11-26T06:38:30.109Z] [2024-11-26 07:38:14.839560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.839857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.839998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
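The paired NOTICE records in this dump come from SPDK's qpair tracers: nvme_qpair.c:243 prints each submitted command and nvme_qpair.c:474 prints its completion. Here every queued WRITE completes with status 03/02, NVMe's Path Related Status set (SCT 3h) with Asymmetric Access Inaccessible (SC 02h), because the trace was recorded while that path's listener had its ANA state set to inaccessible; the multipath bdev retries those I/Os on the remaining accessible path, which is why io_failed stayed 0 in the JSON summary above. Since the whole trace is replayed from try.txt (cat'ed above), the volume of I/O that hit an inaccessible path can be gauged directly; a sketch:

    # Count completions carrying the ANA "inaccessible" status in the replayed log.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt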
00:29:02.011 [2024-11-26 07:38:14.840017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.840033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.840049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.840065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.840084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.840100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.840115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.840131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.840147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.840173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.011 [2024-11-26 07:38:14.840179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:02.011 [2024-11-26 07:38:14.840190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
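Stepping back from the I/O trace for a moment: the teardown at multipath_status.sh@137 went through killprocess in common/autotest_common.sh. The xtrace line numbers (@954-@978) suggest roughly the shape below; this is a reconstruction, not the verbatim helper, and the sudo-wrapper branch is elided because this run (process_name=reactor_2) never took it:

    killprocess() {                           # reconstructed sketch, not the verbatim helper
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1             # @954: a pid is required
        kill -0 "$pid" || return 1            # @958: bail out if it already exited
        if [ "$(uname)" = Linux ]; then       # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        if [ "$process_name" = sudo ]; then   # @964: the real helper treats sudo-wrapped
            :                                 #       processes specially; elided here
        fi
        echo "killing process with pid $pid"  # @972
        kill "$pid"                           # @973
        wait "$pid"                           # @978: reap it and propagate its exit status
    }

The per-job JSON lands between the kill and the wait in the log, apparently because bdevperf prints its summary (status "terminated") while handling the signal, before the shell reaps it.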
00:29:02.012 [2024-11-26 07:38:14.840495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.840739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.840744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.841119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.012 [2024-11-26 07:38:14.841130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.012 [2024-11-26 07:38:14.841142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.013 [2024-11-26 07:38:14.841147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 
dnr:0 00:29:02.013 [2024-11-26 07:38:14.841342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.013 [2024-11-26 07:38:14.841536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.013 [2024-11-26 07:38:14.841741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.013 [2024-11-26 07:38:14.841752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:02.014 [2024-11-26 07:38:14.841804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 
nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.841987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.841992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.842009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.842024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.842040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.842055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.842070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.842086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.842106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.842122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.014 [2024-11-26 07:38:14.842526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.014 [2024-11-26 07:38:14.842543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.014 [2024-11-26 07:38:14.842559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.014 [2024-11-26 07:38:14.842575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.014 [2024-11-26 07:38:14.842590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.014 [2024-11-26 07:38:14.842607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.014 [2024-11-26 07:38:14.842623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.014 [2024-11-26 07:38:14.842639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.014 [2024-11-26 07:38:14.842649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.014 [2024-11-26 07:38:14.842654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:29:02.015 [2024-11-26 07:38:14.842664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842966] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.842994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.842999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.843009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.843014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.843024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.843030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.843040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.843045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.843056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.843061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.843071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.843077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.843087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.843092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.843102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.853347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.853393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 
07:38:14.853402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.853416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.853423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.853437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.015 [2024-11-26 07:38:14.853444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.015 [2024-11-26 07:38:14.853457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11280 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.853746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.853753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854439] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.016 [2024-11-26 07:38:14.854610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.016 [2024-11-26 07:38:14.854631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 
07:38:14.854644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.016 [2024-11-26 07:38:14.854651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.016 [2024-11-26 07:38:14.854671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.016 [2024-11-26 07:38:14.854692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.016 [2024-11-26 07:38:14.854712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.016 [2024-11-26 07:38:14.854732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.016 [2024-11-26 07:38:14.854752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.016 [2024-11-26 07:38:14.854766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.016 [2024-11-26 07:38:14.854772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.017 [2024-11-26 07:38:14.854786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.017 [2024-11-26 07:38:14.854792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.017 [2024-11-26 07:38:14.854806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.017 [2024-11-26 07:38:14.854813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.017 [2024-11-26 07:38:14.854826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.017 [2024-11-26 07:38:14.854834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
00:29:02.017 [condensed: long run of repeated NOTICE pairs, 2024-11-26 07:38:14.854 through 07:38:14.868, elided]
00:29:02.017 Every pair in this span has the same shape, differing only in opcode, cid, lba and sqhd:
00:29:02.017   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:<0-126> nsid:1 lba:<10416-10896> len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.017   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:<0-126> nsid:1 lba:<10904-11432> len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:02.017   nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:<matching> cdw0:0 sqhd:<incrementing, wraps after 007f> p:0 m:0 dnr:0
00:29:02.022 The lba window 10416-11432 is printed twice (bursts at 14.854-14.858 and at 14.865-14.868), i.e. the same outstanding qid:1 commands are failed back on two successive passes; every completion in the span carries the same status 03/02 with dnr:0, and the run continues past the end of this excerpt.
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868417] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.022 [2024-11-26 07:38:14.868581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.022 [2024-11-26 07:38:14.868598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.868607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.868624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.868632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.868649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.868658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 
07:38:14.868675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.868683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.868700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.868709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.868726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.868734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.868751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.868760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.868776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.868785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.868803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.868812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 
cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.023 [2024-11-26 07:38:14.869849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.869877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.869903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.869929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.869954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.869980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.869997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 
07:38:14.870134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.023 [2024-11-26 07:38:14.870287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.023 [2024-11-26 07:38:14.870295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10576 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.024 [2024-11-26 07:38:14.870474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870647] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 
07:38:14.870909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.870986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.870995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.871012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.871020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.871037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.871046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.871063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.871071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.871088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.871097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.871114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.871122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.871141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.871150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.871170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.024 [2024-11-26 07:38:14.871179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.024 [2024-11-26 07:38:14.871197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.871205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.871222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.871230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.871247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.871256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.871273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.871282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.871299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.871307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.871324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.871333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.871350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.871359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.874437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.874469] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.874499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.025 [2024-11-26 07:38:14.874534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 
07:38:14.874774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.874979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.874991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11040 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:02.025 [2024-11-26 07:38:14.875359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.025 [2024-11-26 07:38:14.875369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.875958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:29:02.026 [2024-11-26 07:38:14.875987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.875997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.876019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.876029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.876049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.876059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.876078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.876088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.876108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.876119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.026 12396.50 IOPS, 48.42 MiB/s [2024-11-26T06:38:30.124Z] [2024-11-26 07:38:14.876956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.876975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.876997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.026 [2024-11-26 07:38:14.877367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.026 [2024-11-26 07:38:14.877377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:02.027 [2024-11-26 07:38:14.877407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 
nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.877971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.877982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.027 [2024-11-26 07:38:14.878101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 00:29:02.027 [2024-11-26 07:38:14.878307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.027 [2024-11-26 07:38:14.878531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.027 [2024-11-26 07:38:14.878551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.878975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.878985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.879005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.879015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.879035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.879045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.879065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.879077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.879098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.879108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.880029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.880061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.880092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.880122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.028 [2024-11-26 07:38:14.880152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:20 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:02.028 [2024-11-26 07:38:14.880600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.028 [2024-11-26 07:38:14.880610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.880960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.880970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
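One detail worth reading out of the completions just above: sqhd is the controller's submission-queue head pointer echoed in every completion, and it climbs to 0x007e (cid:64), 0x007f (cid:120) and then wraps to 0x0000 (cid:97), so the I/O submission queue behind qid:1 holds 128 entries. A sketch of that inference, assuming sqhd values were extracted with a parser like the one sketched earlier:

```python
def infer_sq_entries(sqhd_values):
    """Infer the submission-queue ring size from the sqhd wrap point.

    sqhd is a ring index echoed in each completion; the first time it goes
    backwards, the ring wrapped, so the previous value + 1 is the entry count.
    """
    prev = None
    for v in sqhd_values:
        if prev is not None and v < prev:
            return prev + 1
        prev = v
    return None

# The wrap visible above: ... sqhd:007e, sqhd:007f, sqhd:0000, sqhd:0001 ...
assert infer_sq_entries([0x7e, 0x7f, 0x0000, 0x0001]) == 128
```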
00:29:02.029 [2024-11-26 07:38:14.880990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.029 [2024-11-26 07:38:14.881624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.029 [2024-11-26 07:38:14.881634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.881654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.881664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.881684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.881694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
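Another regularity in the trace: every WRITE is printed with "SGL DATA BLOCK OFFSET 0x0 len:0x1000" while every READ is printed with "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0". On an NVMe/TCP connection that split matches the usual data-transfer model — write payloads carried in-capsule as a plain SGL Data Block (offset 0, 0x1000 = 4096 bytes, agreeing with len:8 blocks), read data returned in transport PDUs described by a Transport SGL Data Block descriptor — though that reading of the descriptor types is an interpretation, not something the log itself states. A small tally sketch over raw record strings:

```python
from collections import Counter

def tally_sgl_types(records):
    """Count command records by opcode and by the printed SGL descriptor type."""
    counts = Counter()
    for text in records:
        op = text.split(" ", 1)[0]
        if op not in ("READ", "WRITE"):
            continue
        if "SGL DATA BLOCK OFFSET" in text:
            counts[(op, "in-capsule data block")] += 1
        elif "SGL TRANSPORT DATA BLOCK" in text:
            counts[(op, "transport data block")] += 1
    return counts

sample = [
    "WRITE sqid:1 cid:89 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000",
    "READ sqid:1 cid:35 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0",
]
assert tally_sgl_types(sample) == {("WRITE", "in-capsule data block"): 1,
                                   ("READ", "transport data block"): 1}
```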
00:29:02.030 [2024-11-26 07:38:14.882634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.030 [2024-11-26 07:38:14.882826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.882848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.882871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.882893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.882914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.882936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.882957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.882979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.882993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
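Finally, the per-second performance checkpoint interleaved earlier in this stream ("12396.50 IOPS, 48.42 MiB/s") is self-consistent with the I/O geometry in these records: len:8 blocks per command and the 0x1000-byte SGL payload imply a 512-byte LBA format (8 × 512 B = 4096 B per I/O), and 12396.50 IOPS at 4 KiB each is 48.42 MiB/s. A quick check of that arithmetic (the 512-byte LBA format is an assumption; it is what makes the two printed numbers agree):

```python
# Cross-check the "12396.50 IOPS, 48.42 MiB/s" checkpoint against the 4 KiB
# I/O size implied by len:8 blocks and the 0x1000-byte SGL payloads above,
# assuming a 512-byte LBA format.
iops = 12396.50
io_bytes = 8 * 512                    # blocks per command * bytes per block
mib_per_s = iops * io_bytes / 2**20   # bytes/s -> MiB/s
print(f"{mib_per_s:.2f} MiB/s")       # prints "48.42 MiB/s", matching the log
```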
00:29:02.030 [2024-11-26 07:38:14.883279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.030 [2024-11-26 07:38:14.883300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.030 [2024-11-26 07:38:14.883308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.031 [2024-11-26 07:38:14.883351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:02.031 [2024-11-26 07:38:14.883916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.883983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.883997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.884004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.884018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.884025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.884040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.884047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.884684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.884695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.884710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.884718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.884732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.884740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.884754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.031 [2024-11-26 07:38:14.884761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:02.031 [2024-11-26 07:38:14.884776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.032 [2024-11-26 07:38:14.884783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.884797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.032 [2024-11-26 07:38:14.884804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.884821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.884829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.884843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.884850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.884865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.884872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.884887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.884894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.884908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.884916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.884930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.884937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.884951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.884958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.884972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.884979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.884994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:29:02.032 [2024-11-26 07:38:14.885195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.032 [2024-11-26 07:38:14.885607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.032 [2024-11-26 07:38:14.885614] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 
07:38:14.885832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.885891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.885898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11352 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.033 [2024-11-26 07:38:14.886840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.033 [2024-11-26 07:38:14.886862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.033 [2024-11-26 07:38:14.886884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.033 [2024-11-26 07:38:14.886907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.033 [2024-11-26 07:38:14.886929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.033 [2024-11-26 07:38:14.886951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.033 [2024-11-26 07:38:14.886966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.886973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.886987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.886995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887079] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 
p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.034 [2024-11-26 07:38:14.887373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.034 [2024-11-26 07:38:14.887713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.034 [2024-11-26 07:38:14.887720] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:02.034 [2024-11-26 07:38:14.887735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.034 [2024-11-26 07:38:14.887742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
[... remainder of this burst omitted: several hundred further near-identical command/completion pairs — READ commands (sqid:1, nsid:1, lba 10416-11424, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (same sqid/nsid/lba range, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, logged between 2024-11-26 07:38:14.887 and 07:38:14.894 ...]
00:29:02.040 [2024-11-26 07:38:14.894442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:02.040 [2024-11-26 07:38:14.894447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:02.040 [2024-11-26 07:38:14.894457] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.040 [2024-11-26 07:38:14.894463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.040 [2024-11-26 07:38:14.894478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.040 [2024-11-26 07:38:14.894493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.040 [2024-11-26 07:38:14.894508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.040 [2024-11-26 07:38:14.894524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.040 [2024-11-26 07:38:14.894539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.040 [2024-11-26 07:38:14.894555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.040 [2024-11-26 07:38:14.894571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.040 [2024-11-26 07:38:14.894586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.040 [2024-11-26 07:38:14.894601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 
07:38:14.894612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.040 [2024-11-26 07:38:14.894845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.040 [2024-11-26 07:38:14.894850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.894860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.894865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.894875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.894880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.894891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.894896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.894906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.894911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.894921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.894927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.894939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.894944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.894954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.894962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.894972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.041 [2024-11-26 07:38:14.894977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.894988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.894994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:02.041 [2024-11-26 07:38:14.895071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:10720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.041 [2024-11-26 07:38:14.895867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:02.041 [2024-11-26 07:38:14.895877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.042 [2024-11-26 07:38:14.895882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.895893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.042 [2024-11-26 07:38:14.895898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.895908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.042 [2024-11-26 07:38:14.895913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.895923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.042 [2024-11-26 07:38:14.895931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.895941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.042 [2024-11-26 07:38:14.895946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 
dnr:0 00:29:02.042 [2024-11-26 07:38:14.895957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.042 [2024-11-26 07:38:14.895961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.895972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.042 [2024-11-26 07:38:14.895977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.895987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.042 [2024-11-26 07:38:14.895992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.042 [2024-11-26 07:38:14.896386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:02.042 [2024-11-26 07:38:14.896396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:02.043 [2024-11-26 07:38:14.896417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.896603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.896995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.043 [2024-11-26 07:38:14.897345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.043 [2024-11-26 07:38:14.897350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:29:02.043 [2024-11-26 07:38:14.897360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:02.043 [2024-11-26 07:38:14.897367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
(command/completion pairs of this same form repeat for the rest of the queued I/O on qid:1 -- WRITEs covering lba:10904-11432 and READs covering lba:10416-10896, all len:8, with some LBAs reissued under new cids -- and every completion returns ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0)
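Every completion in this stretch carries the same status pair, (03/02). In NVMe terms the first value is the Status Code Type (0x3, Path Related Status) and the second the Status Code (0x2, Asymmetric Access Inaccessible), which is exactly what spdk_nvme_print_completion spells out in words; dnr:0 means the "do not retry" bit is clear, so the host may retry once the ANA state changes. A minimal decode sketch in Python, illustrative only -- the table entries follow the NVMe base specification, not anything produced by this test:

    # Illustrative sketch (not part of the test log): decode the "(SCT/SC)"
    # pair that spdk_nvme_print_completion prints, e.g. "(03/02)" above.
    SCT = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC",
           0x2: "MEDIA AND DATA INTEGRITY", 0x3: "PATH RELATED"}
    PATH_SC = {0x0: "INTERNAL PATH ERROR",
               0x1: "ASYMMETRIC ACCESS PERSISTENT LOSS",
               0x2: "ASYMMETRIC ACCESS INACCESSIBLE",
               0x3: "ASYMMETRIC ACCESS TRANSITION"}

    def decode(pair: str) -> str:
        # "(03/02)" -> SCT 0x3 / SC 0x2, both printed as hex in the log
        sct, sc = (int(x, 16) for x in pair.strip("()").split("/"))
        name = PATH_SC.get(sc, "?") if sct == 0x3 else "?"
        return f"SCT {sct:#x} ({SCT.get(sct, '?')}) / SC {sc:#x} ({name})"

    print(decode("(03/02)"))
    # -> SCT 0x3 (PATH RELATED) / SC 0x2 (ASYMMETRIC ACCESS INACCESSIBLE)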
00:29:02.047 [2024-11-26 07:38:14.904226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.047 [2024-11-26 07:38:14.904233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:02.047 11442.92 IOPS, 44.70 MiB/s [2024-11-26T06:38:30.145Z] 10625.57 IOPS, 41.51 MiB/s [2024-11-26T06:38:30.145Z] 9917.20 IOPS, 38.74 MiB/s [2024-11-26T06:38:30.145Z] 10086.25 IOPS, 39.40 MiB/s [2024-11-26T06:38:30.145Z] 10249.82 IOPS, 40.04 MiB/s [2024-11-26T06:38:30.145Z] 10606.11 IOPS, 41.43 MiB/s [2024-11-26T06:38:30.145Z] 10929.89 IOPS, 42.69 MiB/s [2024-11-26T06:38:30.145Z] 11147.90 IOPS, 43.55 MiB/s [2024-11-26T06:38:30.145Z] 11227.48 IOPS, 43.86 MiB/s [2024-11-26T06:38:30.145Z] 11302.14 IOPS, 44.15 MiB/s [2024-11-26T06:38:30.145Z] 11497.22 IOPS, 44.91 MiB/s [2024-11-26T06:38:30.145Z] 11716.75 IOPS, 45.77 MiB/s [2024-11-26T06:38:30.145Z]
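The interleaved throughput samples line up with the 4 KiB I/O size seen in the commands above (len:8 blocks of 512 B, matching the len:0x1000 in each WRITE): MiB/s = IOPS x 4096 / 2^20. A quick check against a few of the readings, as an illustrative sketch rather than anything run by the test itself:

    # Illustrative check (not part of the test log): the MiB/s column follows
    # from the IOPS column at the 4 KiB I/O size above (len:8 blocks == 0x1000 B).
    io_bytes = 8 * 512  # len:8 blocks of 512 B == 4096 B
    for iops in (11442.92, 10625.57, 11716.75):
        print(f"{iops:9.2f} IOPS -> {iops * io_bytes / 2**20:5.2f} MiB/s")
    # 11442.92 IOPS -> 44.70 MiB/s, matching the log's "11442.92 IOPS, 44.70 MiB/s"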
00:29:02.047 [2024-11-26 07:38:27.568045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:02.047 [2024-11-26 07:38:27.568080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
(the same pair repeats for the queued WRITEs from lba:110936 through lba:111560, 16 blocks apart, sqhd advancing 0047 through 006e, every completion again ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0)
00:29:02.049 [2024-11-26 07:38:27.569026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:02.049 [2024-11-26 07:38:27.569031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f
p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.049 [2024-11-26 07:38:27.569050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.049 [2024-11-26 07:38:27.569065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.049 [2024-11-26 07:38:27.569081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.049 [2024-11-26 07:38:27.569096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.049 [2024-11-26 07:38:27.569111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.049 [2024-11-26 07:38:27.569127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.049 [2024-11-26 07:38:27.569143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 
07:38:27.569351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.569988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.569998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111152 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.049 [2024-11-26 07:38:27.570224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.049 [2024-11-26 07:38:27.570230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.570247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 
dnr:0 00:29:02.050 [2024-11-26 07:38:27.570794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.050 [2024-11-26 07:38:27.570989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.570999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571098] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:02.050 [2024-11-26 07:38:27.571209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.050 [2024-11-26 07:38:27.571214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.571224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.571230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.571240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.571245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.571256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111696 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.571261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.571931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.571941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.571953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.571958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.571969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.571974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.571985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.571990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 
dnr:0 00:29:02.051 [2024-11-26 07:38:27.572603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.051 [2024-11-26 07:38:27.572655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.051 [2024-11-26 07:38:27.572764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:02.051 [2024-11-26 07:38:27.572774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.572780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.572792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.572797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.573035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.573051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.573067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.573082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.573098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.573113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.573128] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.573144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.573164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.573181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.573197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.573215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.573226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.573231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.574734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.574752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.574769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.574784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.574800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.574815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.574831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.574847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.574863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.052 [2024-11-26 07:38:27.574879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.574900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.574916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.052 [2024-11-26 07:38:27.574932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.052 [2024-11-26 07:38:27.574942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
00:29:02.052 [2024-11-26 07:38:27.574947] nvme_qpair.c: *NOTICE* output (through 00:29:02.059, 2024-11-26 07:38:27.598): several hundred interleaved records, one command/completion pair per entry — nvme_io_qpair_print_command (243) printing READ/WRITE commands (sqid:1 nsid:1, lba 110840-112704, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000 for writes, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 for reads), each followed by spdk_nvme_print_completion (474) reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 with monotonically advancing sqhd values
OFFSET 0x0 len:0x1000 00:29:02.059 [2024-11-26 07:38:27.598791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.059 [2024-11-26 07:38:27.598812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.059 [2024-11-26 07:38:27.598833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.059 [2024-11-26 07:38:27.598854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.059 [2024-11-26 07:38:27.598871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.059 [2024-11-26 07:38:27.598887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-11-26 07:38:27.598903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-11-26 07:38:27.598920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-11-26 07:38:27.598937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-11-26 07:38:27.598955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598965] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-11-26 07:38:27.598971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.598984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-11-26 07:38:27.598990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.599001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-11-26 07:38:27.599006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.599017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.059 [2024-11-26 07:38:27.599023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.599034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-11-26 07:38:27.599040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.599051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.059 [2024-11-26 07:38:27.599057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.059 [2024-11-26 07:38:27.599068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.059 [2024-11-26 07:38:27.599074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.599090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.599107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.599123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 
07:38:27.599134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.599140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.599163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.599179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.599196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.599212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.599229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.599245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.599261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.599278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.599294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.599305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.599311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.600132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.060 [2024-11-26 07:38:27.600343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.600359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.600384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.600400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.060 [2024-11-26 07:38:27.600417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:02.060 [2024-11-26 07:38:27.600428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.600434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.600813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.600830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.600847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.600865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.600882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.600901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.600921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.600940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.600961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.600980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.600991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:79 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.600997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.601014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.601031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.601048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.601065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.601081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.601098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.601114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.601131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.601147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601162] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.601168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.601181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.601186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.602242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.602253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.602266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.602272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.602283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.602289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.602300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.602305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.602316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.602322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.602333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.061 [2024-11-26 07:38:27.602339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.602350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.061 [2024-11-26 07:38:27.602356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.061 [2024-11-26 07:38:27.602367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 
sqhd:000b p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602540] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 
[2024-11-26 07:38:27.602707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.062 [2024-11-26 07:38:27.602806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.602817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.602822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.604062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.604079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.604092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.604098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.604109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.062 [2024-11-26 07:38:27.604115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.062 [2024-11-26 07:38:27.604126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.063 [2024-11-26 07:38:27.604132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.063 [2024-11-26 07:38:27.604148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.063 [2024-11-26 07:38:27.604170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.063 [2024-11-26 07:38:27.604420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.063 [2024-11-26 07:38:27.604436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 
dnr:0 00:29:02.063 [2024-11-26 07:38:27.604447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.063 [2024-11-26 07:38:27.604823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.063 [2024-11-26 07:38:27.604844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.063 [2024-11-26 07:38:27.604861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.063 [2024-11-26 07:38:27.604872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.063 [2024-11-26 07:38:27.604877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.604888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.604894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.604904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.604910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.604921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.604926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.604937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.604942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.604953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.604959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.604970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.604975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.604986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.604991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.605002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.605008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.605019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.605025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.605037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.605042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.605053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.605058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.605069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.605075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.605087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.605092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.605103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.605109] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.605120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.605125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.605136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.605142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.605153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.605163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.606136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.606154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.606177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.606194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.606213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.606230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112848 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.606247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.606263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.064 [2024-11-26 07:38:27.606280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.606297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.606313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.606329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.064 [2024-11-26 07:38:27.606346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:02.064 [2024-11-26 07:38:27.606357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.606362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.606379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.606395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606406] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.606415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.606431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.606448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.606464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.606481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.606497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.606514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.606531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.606547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.606564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 
07:38:27.606574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.606580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.606596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.606614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.606626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.606631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.608734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.608754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.608771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.608787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.608804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.608820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.608837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.608853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.608870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.065 [2024-11-26 07:38:27.608886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.608902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.608922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.608938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.608955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.065 [2024-11-26 07:38:27.608971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.065 [2024-11-26 07:38:27.608983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.608988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.608999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:18 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.066 [2024-11-26 07:38:27.609406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 07:38:27.609466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.066 [2024-11-26 07:38:27.609472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.066 [2024-11-26 
07:38:27.609482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.609488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.609499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.609504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.609515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.609522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.609533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.609538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.609549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.609554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.609565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.067 [2024-11-26 07:38:27.609571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.609582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.067 [2024-11-26 07:38:27.609588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.067 [2024-11-26 07:38:27.611370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.067 [2024-11-26 07:38:27.611388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.067 [2024-11-26 07:38:27.611404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.067 [2024-11-26 07:38:27.611640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.067 [2024-11-26 07:38:27.611656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.067 [2024-11-26 07:38:27.611672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.611688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.611699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.067 [2024-11-26 07:38:27.611704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.612280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:02.067 [2024-11-26 07:38:27.612293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:02.067 [2024-11-26 07:38:27.612304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.067 [2024-11-26 07:38:27.612310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.068 [2024-11-26 07:38:27.612326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.068 [2024-11-26 07:38:27.612420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.068 [2024-11-26 07:38:27.612436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.068 [2024-11-26 07:38:27.612485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.068 [2024-11-26 07:38:27.612501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.068 [2024-11-26 07:38:27.612517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 
07:38:27.612605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.612685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.612690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.613141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.068 [2024-11-26 07:38:27.613151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.613166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.068 [2024-11-26 07:38:27.613172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:02.068 [2024-11-26 07:38:27.613183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.069 [2024-11-26 07:38:27.613300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.069 [2024-11-26 07:38:27.613409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.069 [2024-11-26 07:38:27.613503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:02.069 [2024-11-26 07:38:27.613514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000
00:29:02.069 [2024-11-26 07:38:27.613519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:02.069 [2024-11-26 07:38:27.613529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.069 [2024-11-26 07:38:27.613534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command/spdk_nvme_print_completion NOTICE pairs elided: every outstanding READ/WRITE on qid:1 (nsid:1, lba 112488-114776, len:8, timestamps 2024-11-26 07:38:27.613-07:38:27.623, sqhd advancing from 006f and wrapping through 0030) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:29:02.076 [2024-11-26 07:38:27.623168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.076 [2024-11-26 07:38:27.623174]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.623184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.076 [2024-11-26 07:38:27.623189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.623200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.076 [2024-11-26 07:38:27.623205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.623215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.623220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.623231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.076 [2024-11-26 07:38:27.623236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.623246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.076 [2024-11-26 07:38:27.623252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.623262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.076 [2024-11-26 07:38:27.623267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.623278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.623283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.623293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.623298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.623310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.623315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.623326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114648 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.623331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.625098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.076 [2024-11-26 07:38:27.625113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.625125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.625131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.625141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.625146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.625156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.625166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.625176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.625181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.625191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.625196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.625207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.625212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.625222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.625227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.625238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.076 [2024-11-26 07:38:27.625243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:02.076 [2024-11-26 07:38:27.625253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.625258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.625277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.625370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.625385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 
07:38:27.625411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.625448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.625483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.625546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.625933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.625967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.625982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.625993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.625998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.626009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.626014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.626025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.626032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.626043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.626048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.626058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.626064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.626074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.626079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.626090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.077 [2024-11-26 07:38:27.626095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.626105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.626111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:02.077 [2024-11-26 07:38:27.626121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.077 [2024-11-26 07:38:27.626127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.626921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.626962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.078 [2024-11-26 07:38:27.626968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.627473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.627483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.627494] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.627500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.627511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.627516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.627526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.078 [2024-11-26 07:38:27.627531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:02.078 [2024-11-26 07:38:27.627542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 
sqhd:0009 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627801] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.627879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.627938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.627943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.628826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 
07:38:27.628840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.628852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.079 [2024-11-26 07:38:27.628857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.628868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.628873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.628883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.628888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:02.079 [2024-11-26 07:38:27.628899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.079 [2024-11-26 07:38:27.628903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.628914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.628919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.628929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.628934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.628944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.628949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.628959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.628964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.628975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.628980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.628990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114832 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.628995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.629013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.629246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.629263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.629278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.629294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.629310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.629326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.629341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.629357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.629372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.629388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.629403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.629422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.629437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.629453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.080 [2024-11-26 07:38:27.629469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.629484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.080 [2024-11-26 07:38:27.629500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:02.080 [2024-11-26 07:38:27.629510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.081 [2024-11-26 07:38:27.629516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 
dnr:0 00:29:02.081 [2024-11-26 07:38:27.629526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:02.081 [2024-11-26 07:38:27.629531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:02.081 [... repeated nvme_io_qpair_print_command READ/WRITE notices, each paired with an ASYMMETRIC ACCESS INACCESSIBLE (03/02) spdk_nvme_print_completion notice on qid:1, sqhd:003c through sqhd:0072 ...]
00:29:02.082 11858.72 IOPS, 46.32 MiB/s [2024-11-26T06:38:30.180Z] 11894.04 IOPS, 46.46 MiB/s [2024-11-26T06:38:30.180Z] Received shutdown signal, test time was about 26.898182 seconds
00:29:02.082
00:29:02.082 Latency(us)
00:29:02.082 [2024-11-26T06:38:30.180Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:29:02.082 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:02.082 Verification LBA range: start 0x0 length 0x4000
00:29:02.082 Nvme0n1                    :      26.90   11929.70      46.60       0.00       0.00   10709.70     221.87 3075822.93
00:29:02.082 [2024-11-26T06:38:30.180Z] ===================================================================================================================
00:29:02.082 [2024-11-26T06:38:30.180Z] Total                      :              11929.70      46.60       0.00       0.00   10709.70     221.87 3075822.93
00:29:02.082 07:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
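(A quick consistency check on the summary above: at an I/O size of 4096 bytes, 11929.70 IOPS works out to 11929.70 × 4096 ≈ 48,864,051 bytes/s, and 48,864,051 / 1,048,576 ≈ 46.60 MiB/s, matching the MiB/s column for both the Nvme0n1 job and the run total. The interim samples of 11858.72 and 11894.04 IOPS sit within about 0.6% of that average, even with the ASYMMETRIC ACCESS INACCESSIBLE completions logged while the multipath test toggled ANA path states.)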
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:02.345 rmmod nvme_tcp
00:29:02.345 rmmod nvme_fabrics
00:29:02.345 rmmod nvme_keyring
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1588659 ']'
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1588659
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1588659 ']'
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1588659
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1588659
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1588659'
00:29:02.345 killing process with pid 1588659
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1588659
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1588659
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@791 -- # iptables-save 00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.345 07:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.893 00:29:04.893 real 0m41.494s 00:29:04.893 user 1m47.384s 00:29:04.893 sys 0m11.547s 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:04.893 ************************************ 00:29:04.893 END TEST nvmf_host_multipath_status 00:29:04.893 ************************************ 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.893 ************************************ 00:29:04.893 START TEST nvmf_discovery_remove_ifc 00:29:04.893 ************************************ 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:04.893 * Looking for test storage... 
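Between the two tests above, nvmftestfini/nvmfcleanup walks a fixed teardown order: flush dirty pages, unload the kernel NVMe-over-TCP stack, kill the nvmf_tgt reactor process, strip only the SPDK-tagged iptables rules, and remove the target's network namespace. A minimal shell sketch of that sequence follows; the function name and hard-coded namespace/interface names are illustrative stand-ins, not the actual bodies in nvmf/common.sh:

# Illustrative teardown modeled on the nvmftestfini trace above.
nvmf_teardown() {
    local pid=$1                   # nvmf_tgt pid, e.g. 1588659 in the trace
    sync                           # flush caches before unloading modules
    modprobe -v -r nvme-tcp       # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics   # no-op if the cascade already removed it
    kill "$pid" && wait "$pid"    # stop the target reactor process
    # keep all firewall state except the SPDK_NVMF-commented test rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # drop the target namespace
    ip -4 addr flush cvl_0_1      # clear the initiator-side test address
}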
00:29:04.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.893 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:04.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.893 --rc genhtml_branch_coverage=1 00:29:04.893 --rc genhtml_function_coverage=1 00:29:04.893 --rc genhtml_legend=1 00:29:04.893 --rc geninfo_all_blocks=1 00:29:04.893 --rc geninfo_unexecuted_blocks=1 00:29:04.893 00:29:04.893 ' 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:04.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.894 --rc genhtml_branch_coverage=1 00:29:04.894 --rc genhtml_function_coverage=1 00:29:04.894 --rc genhtml_legend=1 00:29:04.894 --rc geninfo_all_blocks=1 00:29:04.894 --rc geninfo_unexecuted_blocks=1 00:29:04.894 00:29:04.894 ' 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:04.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.894 --rc genhtml_branch_coverage=1 00:29:04.894 --rc genhtml_function_coverage=1 00:29:04.894 --rc genhtml_legend=1 00:29:04.894 --rc geninfo_all_blocks=1 00:29:04.894 --rc geninfo_unexecuted_blocks=1 00:29:04.894 00:29:04.894 ' 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:04.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.894 --rc genhtml_branch_coverage=1 00:29:04.894 --rc genhtml_function_coverage=1 00:29:04.894 --rc genhtml_legend=1 00:29:04.894 --rc geninfo_all_blocks=1 00:29:04.894 --rc geninfo_unexecuted_blocks=1 00:29:04.894 00:29:04.894 ' 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.894 
07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:04.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:04.894 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:04.895 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.895 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.895 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.895 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:04.895 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:04.895 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.895 07:38:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:29:13.222 07:38:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:13.222 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.222 07:38:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:13.222 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:13.222 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:13.222 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.222 07:38:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.222 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.222 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.222 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.222 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.222 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.222 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.222 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.223 
07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:29:13.223 00:29:13.223 --- 10.0.0.2 ping statistics --- 00:29:13.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.223 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:29:13.223 00:29:13.223 --- 10.0.0.1 ping statistics --- 00:29:13.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.223 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1599040 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1599040 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1599040 ']' 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
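With connectivity through the namespace verified by the two pings above, nvmfappstart launches the target inside cvl_0_0_ns_spdk and blocks until its RPC socket answers. The start-and-poll pattern looks roughly like the sketch below; waitforlisten's real implementation lives in autotest_common.sh, so this polling loop is an illustrative equivalent, not the script's actual code:

# Illustrative equivalent of nvmfappstart + waitforlisten from the trace above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods succeeds once the app is up and /var/tmp/spdk.sock is listening;
    # the unix socket is a filesystem object, so it is reachable from outside the netns
    if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done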
00:29:13.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.223 07:38:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.223 [2024-11-26 07:38:40.297041] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:29:13.223 [2024-11-26 07:38:40.297110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.223 [2024-11-26 07:38:40.399462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.223 [2024-11-26 07:38:40.449557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:13.223 [2024-11-26 07:38:40.449607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.223 [2024-11-26 07:38:40.449616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.223 [2024-11-26 07:38:40.449623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.223 [2024-11-26 07:38:40.449629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.223 [2024-11-26 07:38:40.450411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.223 [2024-11-26 07:38:41.188666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.223 [2024-11-26 07:38:41.196999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:13.223 null0 00:29:13.223 [2024-11-26 07:38:41.228884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1599191 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1599191 /tmp/host.sock 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1599191 ']' 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:13.223 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.223 07:38:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.493 [2024-11-26 07:38:41.306653] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:29:13.493 [2024-11-26 07:38:41.306722] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599191 ] 00:29:13.493 [2024-11-26 07:38:41.398715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.493 [2024-11-26 07:38:41.452645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.066 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.066 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:14.066 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.066 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:14.066 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.066 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:14.066 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.066 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:14.066 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.066 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:14.327 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.327 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:14.327 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.327 07:38:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:15.270 [2024-11-26 07:38:43.237368] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:15.270 [2024-11-26 07:38:43.237400] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:15.270 [2024-11-26 07:38:43.237415] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:15.531 [2024-11-26 07:38:43.365824] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:15.531 [2024-11-26 07:38:43.592332] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:15.531 [2024-11-26 07:38:43.593533] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x77d3f0:1 started. 00:29:15.531 [2024-11-26 07:38:43.595314] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:15.531 [2024-11-26 07:38:43.595374] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:15.531 [2024-11-26 07:38:43.595399] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:15.531 [2024-11-26 07:38:43.595416] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:15.531 [2024-11-26 07:38:43.595439] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:15.531 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.531 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:15.531 [2024-11-26 07:38:43.597911] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x77d3f0 was disconnected and freed. delete nvme_qpair. 
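[editor's note] The trace above is the host-side bring-up for the discovery_remove_ifc test: a second SPDK app was started with --wait-for-rpc listening on /tmp/host.sock, reconnect behavior is tuned with bdev_nvme_set_options, the framework is released with framework_start_init, and a discovery service is attached with deliberately short loss/reconnect timers so a path failure resolves within seconds. A minimal sketch of the same sequence driven directly with scripts/rpc.py (assuming it is run from an SPDK checkout and the host app is already listening on /tmp/host.sock; flag values are copied from the trace, not tuning recommendations):

  # tune bdev_nvme before the framework starts (same -e 1 the trace shows)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  # release the app from --wait-for-rpc
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  # attach a discovery service; a bdev is created for each subsystem it reports
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach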
00:29:15.531 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:15.531 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.531 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:15.531 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.531 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:15.531 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:15.531 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:15.792 07:38:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:16.735 07:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:16.735 07:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:16.735 07:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:16.735 07:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.735 07:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:16.735 07:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:16.735 07:38:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:16.995 07:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.995 07:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:16.995 07:38:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:17.936 07:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:17.936 07:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:17.936 07:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:17.936 07:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.936 07:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:17.936 07:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:17.936 07:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:17.936 07:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.936 07:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:17.936 07:38:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:18.877 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:18.877 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:18.877 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:18.877 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.877 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:18.877 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:18.877 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:18.877 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.138 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:19.138 07:38:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:20.079 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:20.079 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:20.079 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:20.079 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.079 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:20.079 07:38:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:20.079 07:38:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:20.079 07:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.079 07:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:20.079 07:38:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:21.021 [2024-11-26 07:38:49.035513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:21.021 [2024-11-26 07:38:49.035548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.021 [2024-11-26 07:38:49.035558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.021 [2024-11-26 07:38:49.035565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.022 [2024-11-26 07:38:49.035571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.022 [2024-11-26 07:38:49.035577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.022 [2024-11-26 07:38:49.035582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.022 [2024-11-26 07:38:49.035587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.022 [2024-11-26 07:38:49.035593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.022 [2024-11-26 07:38:49.035602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.022 [2024-11-26 07:38:49.035607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.022 [2024-11-26 07:38:49.035613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x759c00 is same with the state(6) to be set 00:29:21.022 07:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:21.022 07:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:21.022 07:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:21.022 07:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.022 07:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:21.022 07:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:21.022 [2024-11-26 07:38:49.045535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x759c00 (9): Bad 
file descriptor 00:29:21.022 07:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:21.022 [2024-11-26 07:38:49.055567] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:21.022 [2024-11-26 07:38:49.055578] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:21.022 [2024-11-26 07:38:49.055582] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:21.022 [2024-11-26 07:38:49.055586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:21.022 [2024-11-26 07:38:49.055602] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:21.022 07:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.022 07:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:21.022 07:38:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:22.414 [2024-11-26 07:38:50.075322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:22.414 [2024-11-26 07:38:50.075422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x759c00 with addr=10.0.0.2, port=4420 00:29:22.414 [2024-11-26 07:38:50.075456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x759c00 is same with the state(6) to be set 00:29:22.414 [2024-11-26 07:38:50.075519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x759c00 (9): Bad file descriptor 00:29:22.414 [2024-11-26 07:38:50.075642] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:29:22.414 [2024-11-26 07:38:50.075701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:22.414 [2024-11-26 07:38:50.075725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:22.414 [2024-11-26 07:38:50.075750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:22.414 [2024-11-26 07:38:50.075771] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:22.414 [2024-11-26 07:38:50.075787] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:22.414 [2024-11-26 07:38:50.075800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:22.414 [2024-11-26 07:38:50.075823] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
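[editor's note] The rpc_cmd/jq/sort/xargs fragments repeated once per second throughout this stretch are the test's get_bdev_list helper, which wait_for_bdev polls until the host's bdev list matches an expected value ("nvme0n1" while the path is alive, "" after teardown). A minimal reconstruction inferred from the trace (a sketch, not the script verbatim; rpc_cmd in the suite is a wrapper around scripts/rpc.py):

  get_bdev_list() {
      # list bdev names on the host app, normalized to one sorted line
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # poll until the bdev list equals the expected string, e.g. "nvme0n1" or ""
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }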
00:29:22.414 [2024-11-26 07:38:50.075850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:22.414 07:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:22.414 07:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:22.414 07:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:22.414 07:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.414 07:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:22.414 07:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:22.414 07:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:22.414 07:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.414 07:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:22.414 07:38:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:22.987 [2024-11-26 07:38:51.078258] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:22.987 [2024-11-26 07:38:51.078273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:22.987 [2024-11-26 07:38:51.078282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:22.987 [2024-11-26 07:38:51.078287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:22.987 [2024-11-26 07:38:51.078292] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:29:22.987 [2024-11-26 07:38:51.078298] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:22.987 [2024-11-26 07:38:51.078302] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:22.987 [2024-11-26 07:38:51.078305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
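[editor's note] The connect() errno 110 failures and failed reconnect cycles above are the intended fault, not a test bug: the script earlier deleted the target address and downed its interface inside the target namespace, so every reconnect attempt times out until --ctrlr-loss-timeout-sec (2 s here, retried every --reconnect-delay-sec of 1 s) expires and the controller is torn down. The fault injection is just the two commands traced at discovery_remove_ifc.sh@75/76 (assuming the cvl_0_0_ns_spdk namespace created by nvmftestinit):

  # pull the target path out from under the connected host
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down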
00:29:22.987 [2024-11-26 07:38:51.078322] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:22.987 [2024-11-26 07:38:51.078340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.987 [2024-11-26 07:38:51.078347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.987 [2024-11-26 07:38:51.078355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.987 [2024-11-26 07:38:51.078361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.987 [2024-11-26 07:38:51.078366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.987 [2024-11-26 07:38:51.078371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.987 [2024-11-26 07:38:51.078377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.987 [2024-11-26 07:38:51.078382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.987 [2024-11-26 07:38:51.078388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.987 [2024-11-26 07:38:51.078393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.987 [2024-11-26 07:38:51.078402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
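[editor's note] Once the loss timeout expires, the discovery poller prunes the dead path (the remove_discovery_entry line above) and the namespace bdev is deleted, which is what lets the pending wait_for_bdev '' succeed on its next poll. The same condition can be checked by hand (a sketch; the command should print nothing once nvme0n1 is gone):

  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # empty output == bdev deleted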
00:29:22.987 [2024-11-26 07:38:51.079181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x749340 (9): Bad file descriptor 00:29:23.248 [2024-11-26 07:38:51.080192] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:23.248 [2024-11-26 07:38:51.080200] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:23.248 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.509 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:23.509 07:38:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:24.450 07:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:24.450 07:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:24.450 07:38:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:24.450 07:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.450 07:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:24.450 07:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.450 07:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:24.450 07:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.450 07:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:24.450 07:38:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:25.022 [2024-11-26 07:38:53.092557] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:25.022 [2024-11-26 07:38:53.092572] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:25.022 [2024-11-26 07:38:53.092582] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:25.283 [2024-11-26 07:38:53.220955] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:25.283 [2024-11-26 07:38:53.320845] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:29:25.283 [2024-11-26 07:38:53.321648] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x74e120:1 started. 00:29:25.283 [2024-11-26 07:38:53.322548] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:25.283 [2024-11-26 07:38:53.322576] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:25.283 [2024-11-26 07:38:53.322591] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:25.283 [2024-11-26 07:38:53.322602] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:25.283 [2024-11-26 07:38:53.322608] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:25.283 [2024-11-26 07:38:53.371787] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x74e120 was disconnected and freed. delete nvme_qpair. 
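[editor's note] This re-attach is the recovery leg of the test: restoring the address and link inside the namespace lets the still-running discovery service reconnect, and because the old controller was fully torn down, a fresh controller instance is created and the namespace comes back as nvme1n1 rather than nvme0n1. The restore step as traced at discovery_remove_ifc.sh@82/83:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # wait_for_bdev nvme1n1 then polls until the re-created bdev appears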
00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1599191 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1599191 ']' 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1599191 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1599191 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1599191' 00:29:25.544 killing process with pid 1599191 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1599191 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1599191 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.544 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.806 rmmod nvme_tcp 00:29:25.806 rmmod nvme_fabrics 00:29:25.806 rmmod nvme_keyring 00:29:25.806 07:38:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1599040 ']' 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1599040 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1599040 ']' 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1599040 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1599040 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1599040' 00:29:25.806 killing process with pid 1599040 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1599040 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1599040 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.806 07:38:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.348 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.348 00:29:28.348 real 0m23.430s 00:29:28.348 user 0m27.603s 00:29:28.348 sys 0m7.110s 00:29:28.348 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.348 07:38:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:28.348 ************************************ 00:29:28.348 END TEST nvmf_discovery_remove_ifc 00:29:28.348 ************************************ 00:29:28.348 07:38:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:28.348 07:38:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:28.348 07:38:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.348 07:38:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.348 ************************************ 00:29:28.348 START TEST nvmf_identify_kernel_target 00:29:28.348 ************************************ 00:29:28.348 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:28.348 * Looking for test storage... 00:29:28.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.348 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:28.348 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:29:28.348 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:28.348 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:28.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.349 --rc genhtml_branch_coverage=1 00:29:28.349 --rc genhtml_function_coverage=1 00:29:28.349 --rc genhtml_legend=1 00:29:28.349 --rc geninfo_all_blocks=1 00:29:28.349 --rc geninfo_unexecuted_blocks=1 00:29:28.349 00:29:28.349 ' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:28.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.349 --rc genhtml_branch_coverage=1 00:29:28.349 --rc genhtml_function_coverage=1 00:29:28.349 --rc genhtml_legend=1 00:29:28.349 --rc geninfo_all_blocks=1 00:29:28.349 --rc geninfo_unexecuted_blocks=1 00:29:28.349 00:29:28.349 ' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:28.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.349 --rc genhtml_branch_coverage=1 00:29:28.349 --rc genhtml_function_coverage=1 00:29:28.349 --rc genhtml_legend=1 00:29:28.349 --rc geninfo_all_blocks=1 00:29:28.349 --rc geninfo_unexecuted_blocks=1 00:29:28.349 00:29:28.349 ' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:28.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.349 --rc genhtml_branch_coverage=1 00:29:28.349 --rc genhtml_function_coverage=1 00:29:28.349 --rc genhtml_legend=1 00:29:28.349 --rc geninfo_all_blocks=1 00:29:28.349 --rc geninfo_unexecuted_blocks=1 00:29:28.349 00:29:28.349 ' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:29:28.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.349 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.350 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.350 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.350 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.350 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.350 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.350 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.350 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.350 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.350 07:38:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.489 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.490 07:39:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:36.490 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:36.490 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:36.490 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:36.490 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.490 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:29:36.490 00:29:36.490 --- 10.0.0.2 ping statistics --- 00:29:36.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.490 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:36.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:29:36.491 00:29:36.491 --- 10.0.0.1 ping statistics --- 00:29:36.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.491 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:36.491 07:39:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:36.491 07:39:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:39.791 Waiting for block devices as requested 00:29:39.791 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:39.791 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:39.791 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:39.791 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:39.791 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:39.791 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:39.791 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:40.051 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:40.051 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:40.312 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:40.312 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:40.312 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:40.573 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:40.573 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:40.573 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:40.573 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:40.833 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
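The configure_kernel_target step that runs next drives the Linux kernel nvmet target entirely through configfs. The mkdir, echo, and ln -s calls appear in the xtrace that follows, but set -x hides redirection targets, so the attribute file names in this condensed sketch are the standard nvmet ones, inferred rather than copied from the log:

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$ns" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"  # model string seen in the identify output below
    echo 1 > "$subsys/attr_allow_any_host"                        # assumed target of the first 'echo 1'
    echo /dev/nvme0n1 > "$ns/device_path"                         # backing block device selected below
    echo 1 > "$ns/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp  > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                           # expose the subsystem on the port

Once the symlink lands, the nvme discover call below returns two discovery log entries: the well-known discovery subsystem and nqn.2016-06.io.spdk:testnqn.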
00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:41.094 No valid GPT data, bailing 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:41.094 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:29:41.354 00:29:41.354 Discovery Log Number of Records 2, Generation counter 2 00:29:41.354 =====Discovery Log Entry 0====== 00:29:41.354 trtype: tcp 00:29:41.354 adrfam: ipv4 00:29:41.354 subtype: current discovery subsystem 00:29:41.354 treq: not specified, sq flow control disable supported 00:29:41.354 portid: 1 00:29:41.354 trsvcid: 4420 00:29:41.354 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:41.354 traddr: 10.0.0.1 00:29:41.354 eflags: none 00:29:41.354 sectype: none 00:29:41.354 =====Discovery Log Entry 1====== 00:29:41.354 trtype: tcp 00:29:41.354 adrfam: ipv4 00:29:41.354 subtype: nvme subsystem 00:29:41.354 treq: not specified, sq flow control disable 
supported 00:29:41.354 portid: 1 00:29:41.354 trsvcid: 4420 00:29:41.354 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:41.354 traddr: 10.0.0.1 00:29:41.354 eflags: none 00:29:41.354 sectype: none 00:29:41.354 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:41.354 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:41.354 ===================================================== 00:29:41.354 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:41.354 ===================================================== 00:29:41.354 Controller Capabilities/Features 00:29:41.354 ================================ 00:29:41.355 Vendor ID: 0000 00:29:41.355 Subsystem Vendor ID: 0000 00:29:41.355 Serial Number: 219a925535e55bb7c7a8 00:29:41.355 Model Number: Linux 00:29:41.355 Firmware Version: 6.8.9-20 00:29:41.355 Recommended Arb Burst: 0 00:29:41.355 IEEE OUI Identifier: 00 00 00 00:29:41.355 Multi-path I/O 00:29:41.355 May have multiple subsystem ports: No 00:29:41.355 May have multiple controllers: No 00:29:41.355 Associated with SR-IOV VF: No 00:29:41.355 Max Data Transfer Size: Unlimited 00:29:41.355 Max Number of Namespaces: 0 00:29:41.355 Max Number of I/O Queues: 1024 00:29:41.355 NVMe Specification Version (VS): 1.3 00:29:41.355 NVMe Specification Version (Identify): 1.3 00:29:41.355 Maximum Queue Entries: 1024 00:29:41.355 Contiguous Queues Required: No 00:29:41.355 Arbitration Mechanisms Supported 00:29:41.355 Weighted Round Robin: Not Supported 00:29:41.355 Vendor Specific: Not Supported 00:29:41.355 Reset Timeout: 7500 ms 00:29:41.355 Doorbell Stride: 4 bytes 00:29:41.355 NVM Subsystem Reset: Not Supported 00:29:41.355 Command Sets Supported 00:29:41.355 NVM Command Set: Supported 00:29:41.355 Boot Partition: Not Supported 00:29:41.355 Memory Page Size Minimum: 4096 bytes 00:29:41.355 Memory Page Size Maximum: 4096 bytes 00:29:41.355 Persistent Memory Region: Not Supported 00:29:41.355 Optional Asynchronous Events Supported 00:29:41.355 Namespace Attribute Notices: Not Supported 00:29:41.355 Firmware Activation Notices: Not Supported 00:29:41.355 ANA Change Notices: Not Supported 00:29:41.355 PLE Aggregate Log Change Notices: Not Supported 00:29:41.355 LBA Status Info Alert Notices: Not Supported 00:29:41.355 EGE Aggregate Log Change Notices: Not Supported 00:29:41.355 Normal NVM Subsystem Shutdown event: Not Supported 00:29:41.355 Zone Descriptor Change Notices: Not Supported 00:29:41.355 Discovery Log Change Notices: Supported 00:29:41.355 Controller Attributes 00:29:41.355 128-bit Host Identifier: Not Supported 00:29:41.355 Non-Operational Permissive Mode: Not Supported 00:29:41.355 NVM Sets: Not Supported 00:29:41.355 Read Recovery Levels: Not Supported 00:29:41.355 Endurance Groups: Not Supported 00:29:41.355 Predictable Latency Mode: Not Supported 00:29:41.355 Traffic Based Keep ALive: Not Supported 00:29:41.355 Namespace Granularity: Not Supported 00:29:41.355 SQ Associations: Not Supported 00:29:41.355 UUID List: Not Supported 00:29:41.355 Multi-Domain Subsystem: Not Supported 00:29:41.355 Fixed Capacity Management: Not Supported 00:29:41.355 Variable Capacity Management: Not Supported 00:29:41.355 Delete Endurance Group: Not Supported 00:29:41.355 Delete NVM Set: Not Supported 00:29:41.355 Extended LBA Formats Supported: Not Supported 00:29:41.355 Flexible Data Placement 
Supported: Not Supported 00:29:41.355 00:29:41.355 Controller Memory Buffer Support 00:29:41.355 ================================ 00:29:41.355 Supported: No 00:29:41.355 00:29:41.355 Persistent Memory Region Support 00:29:41.355 ================================ 00:29:41.355 Supported: No 00:29:41.355 00:29:41.355 Admin Command Set Attributes 00:29:41.355 ============================ 00:29:41.355 Security Send/Receive: Not Supported 00:29:41.355 Format NVM: Not Supported 00:29:41.355 Firmware Activate/Download: Not Supported 00:29:41.355 Namespace Management: Not Supported 00:29:41.355 Device Self-Test: Not Supported 00:29:41.355 Directives: Not Supported 00:29:41.355 NVMe-MI: Not Supported 00:29:41.355 Virtualization Management: Not Supported 00:29:41.355 Doorbell Buffer Config: Not Supported 00:29:41.355 Get LBA Status Capability: Not Supported 00:29:41.355 Command & Feature Lockdown Capability: Not Supported 00:29:41.355 Abort Command Limit: 1 00:29:41.355 Async Event Request Limit: 1 00:29:41.355 Number of Firmware Slots: N/A 00:29:41.355 Firmware Slot 1 Read-Only: N/A 00:29:41.355 Firmware Activation Without Reset: N/A 00:29:41.355 Multiple Update Detection Support: N/A 00:29:41.355 Firmware Update Granularity: No Information Provided 00:29:41.355 Per-Namespace SMART Log: No 00:29:41.355 Asymmetric Namespace Access Log Page: Not Supported 00:29:41.355 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:41.355 Command Effects Log Page: Not Supported 00:29:41.355 Get Log Page Extended Data: Supported 00:29:41.355 Telemetry Log Pages: Not Supported 00:29:41.355 Persistent Event Log Pages: Not Supported 00:29:41.355 Supported Log Pages Log Page: May Support 00:29:41.355 Commands Supported & Effects Log Page: Not Supported 00:29:41.355 Feature Identifiers & Effects Log Page:May Support 00:29:41.355 NVMe-MI Commands & Effects Log Page: May Support 00:29:41.355 Data Area 4 for Telemetry Log: Not Supported 00:29:41.355 Error Log Page Entries Supported: 1 00:29:41.355 Keep Alive: Not Supported 00:29:41.355 00:29:41.355 NVM Command Set Attributes 00:29:41.355 ========================== 00:29:41.355 Submission Queue Entry Size 00:29:41.355 Max: 1 00:29:41.355 Min: 1 00:29:41.355 Completion Queue Entry Size 00:29:41.355 Max: 1 00:29:41.355 Min: 1 00:29:41.355 Number of Namespaces: 0 00:29:41.355 Compare Command: Not Supported 00:29:41.355 Write Uncorrectable Command: Not Supported 00:29:41.355 Dataset Management Command: Not Supported 00:29:41.355 Write Zeroes Command: Not Supported 00:29:41.355 Set Features Save Field: Not Supported 00:29:41.355 Reservations: Not Supported 00:29:41.355 Timestamp: Not Supported 00:29:41.355 Copy: Not Supported 00:29:41.355 Volatile Write Cache: Not Present 00:29:41.355 Atomic Write Unit (Normal): 1 00:29:41.355 Atomic Write Unit (PFail): 1 00:29:41.355 Atomic Compare & Write Unit: 1 00:29:41.355 Fused Compare & Write: Not Supported 00:29:41.355 Scatter-Gather List 00:29:41.355 SGL Command Set: Supported 00:29:41.355 SGL Keyed: Not Supported 00:29:41.355 SGL Bit Bucket Descriptor: Not Supported 00:29:41.355 SGL Metadata Pointer: Not Supported 00:29:41.355 Oversized SGL: Not Supported 00:29:41.355 SGL Metadata Address: Not Supported 00:29:41.355 SGL Offset: Supported 00:29:41.355 Transport SGL Data Block: Not Supported 00:29:41.355 Replay Protected Memory Block: Not Supported 00:29:41.355 00:29:41.355 Firmware Slot Information 00:29:41.355 ========================= 00:29:41.355 Active slot: 0 00:29:41.355 00:29:41.355 00:29:41.355 Error Log 00:29:41.355 
========= 00:29:41.355 00:29:41.355 Active Namespaces 00:29:41.355 ================= 00:29:41.355 Discovery Log Page 00:29:41.355 ================== 00:29:41.355 Generation Counter: 2 00:29:41.355 Number of Records: 2 00:29:41.355 Record Format: 0 00:29:41.355 00:29:41.355 Discovery Log Entry 0 00:29:41.355 ---------------------- 00:29:41.355 Transport Type: 3 (TCP) 00:29:41.355 Address Family: 1 (IPv4) 00:29:41.355 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:41.355 Entry Flags: 00:29:41.355 Duplicate Returned Information: 0 00:29:41.355 Explicit Persistent Connection Support for Discovery: 0 00:29:41.355 Transport Requirements: 00:29:41.355 Secure Channel: Not Specified 00:29:41.355 Port ID: 1 (0x0001) 00:29:41.355 Controller ID: 65535 (0xffff) 00:29:41.355 Admin Max SQ Size: 32 00:29:41.355 Transport Service Identifier: 4420 00:29:41.355 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:41.355 Transport Address: 10.0.0.1 00:29:41.355 Discovery Log Entry 1 00:29:41.355 ---------------------- 00:29:41.355 Transport Type: 3 (TCP) 00:29:41.355 Address Family: 1 (IPv4) 00:29:41.355 Subsystem Type: 2 (NVM Subsystem) 00:29:41.355 Entry Flags: 00:29:41.355 Duplicate Returned Information: 0 00:29:41.355 Explicit Persistent Connection Support for Discovery: 0 00:29:41.355 Transport Requirements: 00:29:41.355 Secure Channel: Not Specified 00:29:41.355 Port ID: 1 (0x0001) 00:29:41.355 Controller ID: 65535 (0xffff) 00:29:41.355 Admin Max SQ Size: 32 00:29:41.355 Transport Service Identifier: 4420 00:29:41.355 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:41.355 Transport Address: 10.0.0.1 00:29:41.355 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:41.617 get_feature(0x01) failed 00:29:41.617 get_feature(0x02) failed 00:29:41.617 get_feature(0x04) failed 00:29:41.617 ===================================================== 00:29:41.617 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:41.617 ===================================================== 00:29:41.617 Controller Capabilities/Features 00:29:41.617 ================================ 00:29:41.617 Vendor ID: 0000 00:29:41.617 Subsystem Vendor ID: 0000 00:29:41.617 Serial Number: b5b115dfdb0b6c3dabc4 00:29:41.617 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:41.617 Firmware Version: 6.8.9-20 00:29:41.617 Recommended Arb Burst: 6 00:29:41.617 IEEE OUI Identifier: 00 00 00 00:29:41.617 Multi-path I/O 00:29:41.617 May have multiple subsystem ports: Yes 00:29:41.617 May have multiple controllers: Yes 00:29:41.617 Associated with SR-IOV VF: No 00:29:41.617 Max Data Transfer Size: Unlimited 00:29:41.617 Max Number of Namespaces: 1024 00:29:41.617 Max Number of I/O Queues: 128 00:29:41.617 NVMe Specification Version (VS): 1.3 00:29:41.617 NVMe Specification Version (Identify): 1.3 00:29:41.617 Maximum Queue Entries: 1024 00:29:41.617 Contiguous Queues Required: No 00:29:41.617 Arbitration Mechanisms Supported 00:29:41.617 Weighted Round Robin: Not Supported 00:29:41.617 Vendor Specific: Not Supported 00:29:41.617 Reset Timeout: 7500 ms 00:29:41.617 Doorbell Stride: 4 bytes 00:29:41.617 NVM Subsystem Reset: Not Supported 00:29:41.617 Command Sets Supported 00:29:41.617 NVM Command Set: Supported 00:29:41.617 Boot Partition: Not Supported 00:29:41.617 
Memory Page Size Minimum: 4096 bytes 00:29:41.617 Memory Page Size Maximum: 4096 bytes 00:29:41.617 Persistent Memory Region: Not Supported 00:29:41.617 Optional Asynchronous Events Supported 00:29:41.617 Namespace Attribute Notices: Supported 00:29:41.617 Firmware Activation Notices: Not Supported 00:29:41.617 ANA Change Notices: Supported 00:29:41.617 PLE Aggregate Log Change Notices: Not Supported 00:29:41.617 LBA Status Info Alert Notices: Not Supported 00:29:41.617 EGE Aggregate Log Change Notices: Not Supported 00:29:41.617 Normal NVM Subsystem Shutdown event: Not Supported 00:29:41.617 Zone Descriptor Change Notices: Not Supported 00:29:41.617 Discovery Log Change Notices: Not Supported 00:29:41.617 Controller Attributes 00:29:41.617 128-bit Host Identifier: Supported 00:29:41.617 Non-Operational Permissive Mode: Not Supported 00:29:41.617 NVM Sets: Not Supported 00:29:41.617 Read Recovery Levels: Not Supported 00:29:41.617 Endurance Groups: Not Supported 00:29:41.617 Predictable Latency Mode: Not Supported 00:29:41.617 Traffic Based Keep ALive: Supported 00:29:41.617 Namespace Granularity: Not Supported 00:29:41.617 SQ Associations: Not Supported 00:29:41.617 UUID List: Not Supported 00:29:41.617 Multi-Domain Subsystem: Not Supported 00:29:41.617 Fixed Capacity Management: Not Supported 00:29:41.617 Variable Capacity Management: Not Supported 00:29:41.617 Delete Endurance Group: Not Supported 00:29:41.617 Delete NVM Set: Not Supported 00:29:41.617 Extended LBA Formats Supported: Not Supported 00:29:41.617 Flexible Data Placement Supported: Not Supported 00:29:41.617 00:29:41.617 Controller Memory Buffer Support 00:29:41.617 ================================ 00:29:41.617 Supported: No 00:29:41.617 00:29:41.617 Persistent Memory Region Support 00:29:41.617 ================================ 00:29:41.617 Supported: No 00:29:41.617 00:29:41.617 Admin Command Set Attributes 00:29:41.617 ============================ 00:29:41.617 Security Send/Receive: Not Supported 00:29:41.617 Format NVM: Not Supported 00:29:41.617 Firmware Activate/Download: Not Supported 00:29:41.617 Namespace Management: Not Supported 00:29:41.617 Device Self-Test: Not Supported 00:29:41.617 Directives: Not Supported 00:29:41.617 NVMe-MI: Not Supported 00:29:41.617 Virtualization Management: Not Supported 00:29:41.617 Doorbell Buffer Config: Not Supported 00:29:41.617 Get LBA Status Capability: Not Supported 00:29:41.617 Command & Feature Lockdown Capability: Not Supported 00:29:41.617 Abort Command Limit: 4 00:29:41.617 Async Event Request Limit: 4 00:29:41.617 Number of Firmware Slots: N/A 00:29:41.617 Firmware Slot 1 Read-Only: N/A 00:29:41.617 Firmware Activation Without Reset: N/A 00:29:41.617 Multiple Update Detection Support: N/A 00:29:41.617 Firmware Update Granularity: No Information Provided 00:29:41.617 Per-Namespace SMART Log: Yes 00:29:41.617 Asymmetric Namespace Access Log Page: Supported 00:29:41.617 ANA Transition Time : 10 sec 00:29:41.617 00:29:41.617 Asymmetric Namespace Access Capabilities 00:29:41.617 ANA Optimized State : Supported 00:29:41.617 ANA Non-Optimized State : Supported 00:29:41.617 ANA Inaccessible State : Supported 00:29:41.617 ANA Persistent Loss State : Supported 00:29:41.617 ANA Change State : Supported 00:29:41.617 ANAGRPID is not changed : No 00:29:41.617 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:41.617 00:29:41.617 ANA Group Identifier Maximum : 128 00:29:41.617 Number of ANA Group Identifiers : 128 00:29:41.617 Max Number of Allowed Namespaces : 1024 00:29:41.617 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:41.617 Command Effects Log Page: Supported 00:29:41.617 Get Log Page Extended Data: Supported 00:29:41.617 Telemetry Log Pages: Not Supported 00:29:41.617 Persistent Event Log Pages: Not Supported 00:29:41.617 Supported Log Pages Log Page: May Support 00:29:41.617 Commands Supported & Effects Log Page: Not Supported 00:29:41.617 Feature Identifiers & Effects Log Page:May Support 00:29:41.617 NVMe-MI Commands & Effects Log Page: May Support 00:29:41.617 Data Area 4 for Telemetry Log: Not Supported 00:29:41.617 Error Log Page Entries Supported: 128 00:29:41.618 Keep Alive: Supported 00:29:41.618 Keep Alive Granularity: 1000 ms 00:29:41.618 00:29:41.618 NVM Command Set Attributes 00:29:41.618 ========================== 00:29:41.618 Submission Queue Entry Size 00:29:41.618 Max: 64 00:29:41.618 Min: 64 00:29:41.618 Completion Queue Entry Size 00:29:41.618 Max: 16 00:29:41.618 Min: 16 00:29:41.618 Number of Namespaces: 1024 00:29:41.618 Compare Command: Not Supported 00:29:41.618 Write Uncorrectable Command: Not Supported 00:29:41.618 Dataset Management Command: Supported 00:29:41.618 Write Zeroes Command: Supported 00:29:41.618 Set Features Save Field: Not Supported 00:29:41.618 Reservations: Not Supported 00:29:41.618 Timestamp: Not Supported 00:29:41.618 Copy: Not Supported 00:29:41.618 Volatile Write Cache: Present 00:29:41.618 Atomic Write Unit (Normal): 1 00:29:41.618 Atomic Write Unit (PFail): 1 00:29:41.618 Atomic Compare & Write Unit: 1 00:29:41.618 Fused Compare & Write: Not Supported 00:29:41.618 Scatter-Gather List 00:29:41.618 SGL Command Set: Supported 00:29:41.618 SGL Keyed: Not Supported 00:29:41.618 SGL Bit Bucket Descriptor: Not Supported 00:29:41.618 SGL Metadata Pointer: Not Supported 00:29:41.618 Oversized SGL: Not Supported 00:29:41.618 SGL Metadata Address: Not Supported 00:29:41.618 SGL Offset: Supported 00:29:41.618 Transport SGL Data Block: Not Supported 00:29:41.618 Replay Protected Memory Block: Not Supported 00:29:41.618 00:29:41.618 Firmware Slot Information 00:29:41.618 ========================= 00:29:41.618 Active slot: 0 00:29:41.618 00:29:41.618 Asymmetric Namespace Access 00:29:41.618 =========================== 00:29:41.618 Change Count : 0 00:29:41.618 Number of ANA Group Descriptors : 1 00:29:41.618 ANA Group Descriptor : 0 00:29:41.618 ANA Group ID : 1 00:29:41.618 Number of NSID Values : 1 00:29:41.618 Change Count : 0 00:29:41.618 ANA State : 1 00:29:41.618 Namespace Identifier : 1 00:29:41.618 00:29:41.618 Commands Supported and Effects 00:29:41.618 ============================== 00:29:41.618 Admin Commands 00:29:41.618 -------------- 00:29:41.618 Get Log Page (02h): Supported 00:29:41.618 Identify (06h): Supported 00:29:41.618 Abort (08h): Supported 00:29:41.618 Set Features (09h): Supported 00:29:41.618 Get Features (0Ah): Supported 00:29:41.618 Asynchronous Event Request (0Ch): Supported 00:29:41.618 Keep Alive (18h): Supported 00:29:41.618 I/O Commands 00:29:41.618 ------------ 00:29:41.618 Flush (00h): Supported 00:29:41.618 Write (01h): Supported LBA-Change 00:29:41.618 Read (02h): Supported 00:29:41.618 Write Zeroes (08h): Supported LBA-Change 00:29:41.618 Dataset Management (09h): Supported 00:29:41.618 00:29:41.618 Error Log 00:29:41.618 ========= 00:29:41.618 Entry: 0 00:29:41.618 Error Count: 0x3 00:29:41.618 Submission Queue Id: 0x0 00:29:41.618 Command Id: 0x5 00:29:41.618 Phase Bit: 0 00:29:41.618 Status Code: 0x2 00:29:41.618 Status Code Type: 0x0 00:29:41.618 Do Not Retry: 1 00:29:41.618 
Error Location: 0x28 00:29:41.618 LBA: 0x0 00:29:41.618 Namespace: 0x0 00:29:41.618 Vendor Log Page: 0x0 00:29:41.618 ----------- 00:29:41.618 Entry: 1 00:29:41.618 Error Count: 0x2 00:29:41.618 Submission Queue Id: 0x0 00:29:41.618 Command Id: 0x5 00:29:41.618 Phase Bit: 0 00:29:41.618 Status Code: 0x2 00:29:41.618 Status Code Type: 0x0 00:29:41.618 Do Not Retry: 1 00:29:41.618 Error Location: 0x28 00:29:41.618 LBA: 0x0 00:29:41.618 Namespace: 0x0 00:29:41.618 Vendor Log Page: 0x0 00:29:41.618 ----------- 00:29:41.618 Entry: 2 00:29:41.618 Error Count: 0x1 00:29:41.618 Submission Queue Id: 0x0 00:29:41.618 Command Id: 0x4 00:29:41.618 Phase Bit: 0 00:29:41.618 Status Code: 0x2 00:29:41.618 Status Code Type: 0x0 00:29:41.618 Do Not Retry: 1 00:29:41.618 Error Location: 0x28 00:29:41.618 LBA: 0x0 00:29:41.618 Namespace: 0x0 00:29:41.618 Vendor Log Page: 0x0 00:29:41.618 00:29:41.618 Number of Queues 00:29:41.618 ================ 00:29:41.618 Number of I/O Submission Queues: 128 00:29:41.618 Number of I/O Completion Queues: 128 00:29:41.618 00:29:41.618 ZNS Specific Controller Data 00:29:41.618 ============================ 00:29:41.618 Zone Append Size Limit: 0 00:29:41.618 00:29:41.618 00:29:41.618 Active Namespaces 00:29:41.618 ================= 00:29:41.618 get_feature(0x05) failed 00:29:41.618 Namespace ID:1 00:29:41.618 Command Set Identifier: NVM (00h) 00:29:41.618 Deallocate: Supported 00:29:41.618 Deallocated/Unwritten Error: Not Supported 00:29:41.618 Deallocated Read Value: Unknown 00:29:41.618 Deallocate in Write Zeroes: Not Supported 00:29:41.618 Deallocated Guard Field: 0xFFFF 00:29:41.618 Flush: Supported 00:29:41.618 Reservation: Not Supported 00:29:41.618 Namespace Sharing Capabilities: Multiple Controllers 00:29:41.618 Size (in LBAs): 3750748848 (1788GiB) 00:29:41.618 Capacity (in LBAs): 3750748848 (1788GiB) 00:29:41.618 Utilization (in LBAs): 3750748848 (1788GiB) 00:29:41.618 UUID: 7df32ae5-6dd3-4b9b-a72b-e0de46bc92cf 00:29:41.618 Thin Provisioning: Not Supported 00:29:41.618 Per-NS Atomic Units: Yes 00:29:41.618 Atomic Write Unit (Normal): 8 00:29:41.618 Atomic Write Unit (PFail): 8 00:29:41.618 Preferred Write Granularity: 8 00:29:41.618 Atomic Compare & Write Unit: 8 00:29:41.618 Atomic Boundary Size (Normal): 0 00:29:41.618 Atomic Boundary Size (PFail): 0 00:29:41.618 Atomic Boundary Offset: 0 00:29:41.618 NGUID/EUI64 Never Reused: No 00:29:41.618 ANA group ID: 1 00:29:41.618 Namespace Write Protected: No 00:29:41.618 Number of LBA Formats: 1 00:29:41.618 Current LBA Format: LBA Format #00 00:29:41.618 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:41.618 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:41.618 rmmod nvme_tcp 00:29:41.618 rmmod nvme_fabrics 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.618 07:39:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.532 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:43.532 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:43.532 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:43.532 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:29:43.793 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:43.793 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:43.793 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:43.793 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:43.793 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:43.793 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:43.793 07:39:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:47.093 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:47.093 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:47.093 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:47.356 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:47.928 00:29:47.928 real 0m19.721s 00:29:47.928 user 0m5.402s 00:29:47.928 sys 0m11.351s 00:29:47.928 07:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.928 07:39:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:47.928 ************************************ 00:29:47.928 END TEST nvmf_identify_kernel_target 00:29:47.928 ************************************ 00:29:47.928 07:39:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:47.928 07:39:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:47.928 07:39:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.928 07:39:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.928 ************************************ 00:29:47.928 START TEST nvmf_auth_host 00:29:47.928 ************************************ 00:29:47.928 07:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:47.928 * Looking for test storage... 
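Before the timing summary above, the EXIT trap (nvmftestfini || :; clean_kernel_target) unwound everything the test created: it unloaded nvme-tcp and nvme-fabrics, restored iptables from an SPDK_NVMF-filtered save, flushed and removed the cvl_0_0_ns_spdk namespace, and tore down the kernel target. The configfs teardown is visible in the xtrace; only the 'echo 0' redirect target is hidden by set -x and is assumed here to be the namespace enable attribute:

    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn   # unlink subsystem from port
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet

setup.sh then rebinds the ioatdma and nvme devices back to vfio-pci, as logged above, leaving the machine ready for the next test.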
00:29:47.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:47.928 07:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:47.928 07:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:47.928 07:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:48.191 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:48.191 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.191 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.191 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.191 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:48.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.192 --rc genhtml_branch_coverage=1 00:29:48.192 --rc genhtml_function_coverage=1 00:29:48.192 --rc genhtml_legend=1 00:29:48.192 --rc geninfo_all_blocks=1 00:29:48.192 --rc geninfo_unexecuted_blocks=1 00:29:48.192 00:29:48.192 ' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:48.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.192 --rc genhtml_branch_coverage=1 00:29:48.192 --rc genhtml_function_coverage=1 00:29:48.192 --rc genhtml_legend=1 00:29:48.192 --rc geninfo_all_blocks=1 00:29:48.192 --rc geninfo_unexecuted_blocks=1 00:29:48.192 00:29:48.192 ' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:48.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.192 --rc genhtml_branch_coverage=1 00:29:48.192 --rc genhtml_function_coverage=1 00:29:48.192 --rc genhtml_legend=1 00:29:48.192 --rc geninfo_all_blocks=1 00:29:48.192 --rc geninfo_unexecuted_blocks=1 00:29:48.192 00:29:48.192 ' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:48.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.192 --rc genhtml_branch_coverage=1 00:29:48.192 --rc genhtml_function_coverage=1 00:29:48.192 --rc genhtml_legend=1 00:29:48.192 --rc geninfo_all_blocks=1 00:29:48.192 --rc geninfo_unexecuted_blocks=1 00:29:48.192 00:29:48.192 ' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.192 07:39:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:48.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:48.192 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:48.193 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:48.193 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.193 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.193 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.193 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:48.193 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:48.193 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.193 07:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.334 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.334 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.335 07:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:56.335 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:56.335 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.335 
07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:56.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:56.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.335 07:39:23 
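The gather_supported_nvmf_pci_devs records above match PCI vendor:device pairs (Intel 0x8086 E810 IDs 0x1592/0x159b, X722 0x37d2, plus the listed Mellanox 0x15b3 ConnectX IDs) and then resolve each hit to its kernel net device through sysfs. A minimal sketch of that sysfs lookup, using the two 0x8086:0x159b functions this run actually found (on another machine the pci_devs list would come from a PCI-ID scan):

pci_devs=(0000:4b:00.0 0000:4b:00.1)                 # both bound to the ice driver
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")          # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done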
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.335 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:29:56.336 00:29:56.336 --- 10.0.0.2 ping statistics --- 00:29:56.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.336 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:29:56.336 00:29:56.336 --- 10.0.0.1 ping statistics --- 00:29:56.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.336 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1614135 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1614135 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1614135 ']' 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
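nvmf_tcp_init above turns the two E810 ports into a point-to-point target/initiator pair: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP/4420 is opened in iptables, and the two pings prove reachability in both directions. Condensed to just the commands the log shows (names and addresses are exactly those above; the iptables comment argument is omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator

Running nvmf_tgt inside the namespace (the ip netns exec ... nvmf_tgt invocation just below) is what lets a single host exercise a real NIC-to-NIC TCP path instead of loopback.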
00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.336 07:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=84fe2c9bd38763a0ddae6286b195950d 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5cJ 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 84fe2c9bd38763a0ddae6286b195950d 0 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 84fe2c9bd38763a0ddae6286b195950d 0 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=84fe2c9bd38763a0ddae6286b195950d 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5cJ 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5cJ 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5cJ 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:56.598 07:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=071e522a974380e80fbe7362cf8ec59212481f4a33f7ce3408074f965a289182 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.o0C 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 071e522a974380e80fbe7362cf8ec59212481f4a33f7ce3408074f965a289182 3 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 071e522a974380e80fbe7362cf8ec59212481f4a33f7ce3408074f965a289182 3 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=071e522a974380e80fbe7362cf8ec59212481f4a33f7ce3408074f965a289182 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.o0C 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.o0C 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.o0C 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d1882b62336cf308358668478f553653edc4f2c1bac3b019 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lNS 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d1882b62336cf308358668478f553653edc4f2c1bac3b019 0 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d1882b62336cf308358668478f553653edc4f2c1bac3b019 0 
00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d1882b62336cf308358668478f553653edc4f2c1bac3b019 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:56.598 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lNS 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lNS 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.lNS 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b246a82fee5e53f4b682291f0e909176406cf07adfa869a9 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3oE 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b246a82fee5e53f4b682291f0e909176406cf07adfa869a9 2 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b246a82fee5e53f4b682291f0e909176406cf07adfa869a9 2 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b246a82fee5e53f4b682291f0e909176406cf07adfa869a9 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3oE 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3oE 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.3oE 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.861 07:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f593d07a5a306d06dfe79ff89d9e571d 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.yy7 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f593d07a5a306d06dfe79ff89d9e571d 1 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f593d07a5a306d06dfe79ff89d9e571d 1 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f593d07a5a306d06dfe79ff89d9e571d 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.yy7 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.yy7 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.yy7 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b6adff7ff7f00b5ecb7d2fcbafae8317 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ySs 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b6adff7ff7f00b5ecb7d2fcbafae8317 1 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b6adff7ff7f00b5ecb7d2fcbafae8317 1 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=b6adff7ff7f00b5ecb7d2fcbafae8317 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ySs 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ySs 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ySs 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b43db08b078fb094be6e0aa0aaf22ba9aa21e8a6bbd1dfc 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OCU 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b43db08b078fb094be6e0aa0aaf22ba9aa21e8a6bbd1dfc 2 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7b43db08b078fb094be6e0aa0aaf22ba9aa21e8a6bbd1dfc 2 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b43db08b078fb094be6e0aa0aaf22ba9aa21e8a6bbd1dfc 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:56.861 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:57.122 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OCU 00:29:57.122 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OCU 00:29:57.122 07:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.OCU 00:29:57.122 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:57.122 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:57.122 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:57.122 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:57.123 07:39:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c87b14fa37b4ddc6779267ceed7f451a 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.UQh 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c87b14fa37b4ddc6779267ceed7f451a 0 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c87b14fa37b4ddc6779267ceed7f451a 0 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c87b14fa37b4ddc6779267ceed7f451a 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.UQh 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.UQh 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.UQh 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ec6fc014e0d09bbf6fd90e551cc7d7242caca205d28e3ebb53a3563062d5cc0c 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oSs 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ec6fc014e0d09bbf6fd90e551cc7d7242caca205d28e3ebb53a3563062d5cc0c 3 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ec6fc014e0d09bbf6fd90e551cc7d7242caca205d28e3ebb53a3563062d5cc0c 3 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ec6fc014e0d09bbf6fd90e551cc7d7242caca205d28e3ebb53a3563062d5cc0c 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oSs 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oSs 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.oSs 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1614135 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1614135 ']' 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.123 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5cJ 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.o0C ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.o0C 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.lNS 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.3oE ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.3oE 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.yy7 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ySs ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ySs 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.OCU 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.UQh ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.UQh 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.oSs 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.384 07:39:25 
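Each gen_dhchap_key call above draws len/2 random bytes with xxd, then an inline python step (its body is elided by xtrace) wraps the hex string into the DHHC-1 secret representation before chmod 0600 and registration via keyring_file_add_key. Comparing the raw hex in the log with the DHHC-1 strings used later (d1882b62... becomes DHHC-1:00:ZDE4ODJi...) shows the payload is base64 of the hex text plus a 4-byte trailer; per the NVMe in-band-authentication secret format that trailer should be a CRC-32, which is an assumption here since the log never shows it. A minimal sketch of one key's flow:

key_hex=$(xxd -p -c0 -l 24 /dev/urandom)     # 48-char hex secret ("null 48")
file=$(mktemp -t spdk.key-null.XXX)
# assumed reconstruction of the elided "python -" body; the digest id is the
# two-digit prefix seen in the log: 00=null, 01=sha256, 02=sha384, 03=sha512
python3 - "$key_hex" 0 > "$file" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                   # the hex text itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumed CRC-32 trailer
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")
PYEOF
chmod 0600 "$file"
rpc_cmd keyring_file_add_key key1 "$file"    # register with the running nvmf_tgt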
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:57.384 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:57.385 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:29:57.385 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:57.385 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:57.646 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:57.646 07:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:00.946 Waiting for block devices as requested 00:30:00.946 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:00.946 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:01.205 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:01.205 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:01.205 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:01.205 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:01.464 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:01.464 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:01.465 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:30:01.724 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:30:01.724 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:30:01.984 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:30:01.984 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:30:01.984 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:30:01.984 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:30:02.243 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:30:02.243 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:30:03.183 No valid GPT data, bailing 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:03.183 07:39:31 
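configure_kernel_target above stands up the in-kernel nvmet target that the SPDK host will authenticate against: setup.sh reset re-binds the devices, the GPT probe ("No valid GPT data, bailing") confirms /dev/nvme0n1 is safe to export, and the mkdir/echo/ln -s records build the configfs tree. xtrace shows only the echoed values, not the files they land in, so the attribute names in this sketch are the standard nvmet configfs ones and should be read as assumptions:

modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"      # assumed target file
echo 1 > "$subsys/attr_allow_any_host"                           # assumed; auth.sh@37
                                                                 # later echoes 0 here
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover run right after verifies the result: two discovery-log entries on 10.0.0.1:4420, one for the discovery subsystem and one for nqn.2024-02.io.spdk:cnode0.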
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:30:03.183 00:30:03.183 Discovery Log Number of Records 2, Generation counter 2 00:30:03.183 =====Discovery Log Entry 0====== 00:30:03.183 trtype: tcp 00:30:03.183 adrfam: ipv4 00:30:03.183 subtype: current discovery subsystem 00:30:03.183 treq: not specified, sq flow control disable supported 00:30:03.183 portid: 1 00:30:03.183 trsvcid: 4420 00:30:03.183 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:03.183 traddr: 10.0.0.1 00:30:03.183 eflags: none 00:30:03.183 sectype: none 00:30:03.183 =====Discovery Log Entry 1====== 00:30:03.183 trtype: tcp 00:30:03.183 adrfam: ipv4 00:30:03.183 subtype: nvme subsystem 00:30:03.183 treq: not specified, sq flow control disable supported 00:30:03.183 portid: 1 00:30:03.183 trsvcid: 4420 00:30:03.183 subnqn: nqn.2024-02.io.spdk:cnode0 00:30:03.183 traddr: 10.0.0.1 00:30:03.183 eflags: none 00:30:03.183 sectype: none 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.183 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.443 nvme0n1 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:03.443 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
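nvmet_auth_set_key (host/auth.sh@42-51 above) provisions the kernel side of the DH-HMAC-CHAP exchange, and connect_authenticate then drives the SPDK side: bdev_nvme_set_options restricts the digests/dhgroups offered, and bdev_nvme_attach_controller connects using the keyring entries registered earlier. As before, xtrace elides the redirect targets, so the host-entry attribute names here are assumed from the standard nvmet configfs layout; the DHHC-1 values are truncated for brevity and appear in full in the records above:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"                  # assumed attr name
echo ffdhe2048 > "$host/dhchap_dhgroup"                    # assumed attr name
echo 'DHHC-1:00:ZDE4...q3g0TQ==:' > "$host/dhchap_key"     # host secret (key1)
echo 'DHHC-1:02:YjI0...NH3Swg==:' > "$host/dhchap_ctrl_key" # controller secret (ckey1)
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'       # expect nvme0 on success

A successful attach followed by bdev_nvme_detach_controller nvme0 (the rpc pair visible in the records that follow) is what each iteration of the digest/dhgroup/key matrix checks.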
00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.444 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.703 nvme0n1 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.703 07:39:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.703 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.963 nvme0n1 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.963 07:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.223 nvme0n1 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.223 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.482 nvme0n1 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.482 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.741 nvme0n1 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.741 07:39:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.741 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.010 nvme0n1 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.010 
07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.010 07:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.270 nvme0n1 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.270 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.271 07:39:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.271 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.531 nvme0n1 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.531 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.531 07:39:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.532 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.532 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.532 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.532 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.532 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:05.532 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.532 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.792 nvme0n1 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:05.792 07:39:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.792 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.052 nvme0n1 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.052 07:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.311 nvme0n1 00:30:06.311 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.311 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.311 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.311 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:06.312 07:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.312 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.571 nvme0n1 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:06.571 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
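Every nvme0n1 stanza in this trace is one authentication round-trip: pin the initiator to a single digest/dhgroup pair, attach with the DH-HMAC-CHAP key for that keyid, check that the controller actually came up, then detach before the next iteration. A minimal host-side sketch of that loop body, assuming SPDK's scripts/rpc.py client behind the trace's rpc_cmd wrapper and the 10.0.0.1:4420 listener used in this run (the keyring entries key0..key4 and ckey0..ckey3 are registered earlier in auth.sh and are not shown in this excerpt):

  # One (digest, dhgroup, keyid) iteration of the trace's inner loop.
  digest=sha256 dhgroup=ffdhe4096 keyid=0

  # Pin the initiator so negotiation can only succeed with the pair under test.
  scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the host key; the controller key makes the authentication
  # bidirectional (keyid 4 omits it to cover the unidirectional path).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # The iteration only passes if the controller actually materialized.
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Detach so the next combination starts from a clean slate.
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The get_main_ns_ip block that precedes each attach is where the 10.0.0.1 comes from: ip_candidates maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, and since the transport under test is tcp the latter is resolved and echoed into the -a argument.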
00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.830 nvme0n1 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.830 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.090 07:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.349 nvme0n1 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.349 07:39:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.349 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.609 nvme0n1 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:07.609 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.610 07:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.179 nvme0n1 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 
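The echoes at host/auth.sh@48 through @51 are the target-side half of each stanza: nvmet_auth_set_key installs the matching digest, DH group, and DHHC-1 secrets for the host NQN before the initiator connects. A sketch of what those four writes presumably amount to, assuming the standard Linux nvmet configfs attributes (the redirection targets are hidden by the xtrace, so the exact paths below are an assumption, not read from the log):

  # Target side of one stanza: mirror digest/dhgroup/keys into kernel nvmet.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  echo 'hmac(sha256)' > "$host/dhchap_hash"      # auth.sh@48: digest under test
  echo ffdhe6144 > "$host/dhchap_dhgroup"        # auth.sh@49: DH group under test
  echo "$key" > "$host/dhchap_key"               # auth.sh@50: host secret (DHHC-1:xx:...)
  # auth.sh@51: the controller key is optional; keyid 4 carries none, which
  # is why its stanza shows [[ -z '' ]] with no echo after it.
  [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrlr_key"

With that host/target split in mind, a failing combination would surface above as the [[ nvme0 == \n\v\m\e\0 ]] check not matching after the attach; in this run every dhgroup and keyid negotiates and detaches cleanly.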
00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.179 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.749 nvme0n1 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.749 07:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.749 07:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.008 nvme0n1 00:30:09.008 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.008 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.008 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.008 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.008 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.008 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.268 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.528 nvme0n1 00:30:09.528 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.528 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.528 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.528 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.528 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.528 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:09.787 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.788 07:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.047 nvme0n1 00:30:10.047 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.047 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.047 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.047 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.047 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.047 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.047 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.047 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.047 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.047 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.307 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:10.876 nvme0n1 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.876 07:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.444 nvme0n1 00:30:11.444 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.444 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.444 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.444 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.444 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:30:11.703 
07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.703 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.704 07:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.273 nvme0n1 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.273 
07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.273 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.211 nvme0n1 00:30:13.211 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.211 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.211 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.211 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.211 07:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.211 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.211 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.212 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.781 nvme0n1 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.781 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.041 nvme0n1 00:30:14.041 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.041 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.041 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.041 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.041 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:14.041 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.041 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.041 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.041 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.041 07:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.041 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.301 nvme0n1 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:14.301 07:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:14.301 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.302 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.561 nvme0n1 00:30:14.561 07:39:42 
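Each pass through this trace is one cell of the auth matrix (digest x dhgroup x keyid): nvmet_auth_set_key provisions the DHHC-1 host key, plus the matching controller key when one exists, on the kernel nvmet target, and connect_authenticate then reconfigures the SPDK host, attaches, and verifies. A minimal sketch of the target-side step, assuming the standard nvmet configfs host attributes and a hypothetical host directory (the trace shows only the helper's echo statements, not its paths):

  # Mirrors the echo 'hmac(sha256)' / echo ffdhe8192 / echo DHHC-1:... run
  # at host/auth.sh@48-51 in the trace. $host_dir is an assumed configfs
  # location; $key/$ckey hold the DHHC-1 blobs for the current keyid.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # DH-CHAP digest
  echo ffdhe8192      > "$host_dir/dhchap_dhgroup"   # DH group
  echo "$key"         > "$host_dir/dhchap_key"       # host key
  # keyid 4 carries no controller key, hence the [[ -z '' ]] guard above
  [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"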
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.561 nvme0n1 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.561 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.821 nvme0n1 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.821 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.081 07:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.081 nvme0n1 00:30:15.081 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.081 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.081 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.081 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:15.081 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.081 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.342 
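The ip-selection block that replays before every attach (nvmf/common.sh@769-783) picks the connect address from an environment variable keyed by transport: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, here resolving to 10.0.0.1. A sketch of that helper reconstructed from the xtrace, with TEST_TRANSPORT standing in for whatever variable the [[ -z tcp ]] test expands (an assumption):

  # get_main_ns_ip, as replayed in the trace: map transport -> env var name,
  # then indirectly expand that name to get the address to connect to.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the variable *name*
      [[ -z ${!ip} ]] && return 1            # indirect expansion: is it set?
      echo "${!ip}"                          # e.g. 10.0.0.1
  }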
07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.342 07:39:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.342 nvme0n1 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.342 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.603 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.604 nvme0n1 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.604 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.863 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:30:15.863 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.863 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.863 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.863 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.863 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.863 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:30:15.863 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.864 nvme0n1 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.864 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:16.123 
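[Note] Every keyid iteration traced above follows the same three-step shape: program the kernel nvmet target with the key under test, restrict the SPDK host to a single digest/dhgroup pair, then attach and authenticate. A minimal sketch of one iteration, assuming rpc_cmd wraps SPDK's scripts/rpc.py and that the named keys (key2, ckey2, ...) were registered with the keyring earlier in the run (not shown in this excerpt):

  # One sha384/ffdhe3072 iteration, reconstructed from the xtrace output above.
  digest=sha384
  dhgroup=ffdhe3072
  keyid=2

  # Target side (host/auth.sh@103): install key/ckey for the host NQN.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # Host side: allow only the digest/dhgroup under test, then connect with
  # DH-HMAC-CHAP. "key2"/"ckey2" are key names, not the raw DHHC-1 secrets.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"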
07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:16.123 07:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.123 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:30:16.123 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.123 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.123 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.123 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.124 nvme0n1 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.124 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.124 
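[Note] After every attach, the test asserts that exactly one controller named nvme0 exists and then tears it down (host/auth.sh@64-65). The right-hand side of the comparison prints as \n\v\m\e\0 because xtrace escapes the quoted string character by character to suppress pattern matching. Condensed:

  # host/auth.sh@64-65: verify the authenticated controller came up, then
  # detach it before the next key is tried.
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]          # traced as: [[ nvme0 == \n\v\m\e\0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0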
07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:16.384 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.385 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.646 nvme0n1 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.646 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.647 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.647 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.647 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.647 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.647 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.647 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.647 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.647 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:16.647 07:39:44 
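[Note] The secrets above use the NVMe DH-HMAC-CHAP representation DHHC-1:xx:<base64>:, where, as I read the spec, the middle field records how the secret was transformed (00 cleartext; 01/02/03 for SHA-256/384/512, giving 32/48/64-byte keys) and the base64 payload carries the key material plus a 4-byte CRC-32. A quick, hypothetical sanity-check helper, not part of the test suite:

  # Hypothetical helper: report the decoded length of a DHHC-1 secret.
  # A "01" secret should decode to 32 key bytes + 4 CRC bytes = 36.
  dhchap_key_len() {
      local key=$1 b64
      b64=${key#DHHC-1:*:}                 # strip the "DHHC-1:xx:" prefix
      b64=${b64%:}                         # strip the trailing ":"
      printf '%s' "$b64" | base64 -d | wc -c
  }
  dhchap_key_len 'DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV:'  # -> 36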
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.647 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.907 nvme0n1 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.907 07:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.167 nvme0n1 00:30:17.167 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.167 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.167 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.167 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.167 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.167 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
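[Note] The echo 'hmac(sha384)' / echo ffdhe4096 / echo DHHC-1:... trio traced at host/auth.sh@48-51 is the target-side half: the values are written into the kernel nvmet configfs entry for the host NQN. The trace shows only the values, not the destinations; the paths below follow the standard nvmet layout and are an assumption:

  # Assumed destinations for the echoes at host/auth.sh@48-51 (keyid=3,
  # ffdhe4096); the attribute names are the stock nvmet dhchap entries.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host/dhchap_hash"
  echo 'ffdhe4096'    > "$host/dhchap_dhgroup"
  echo 'DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==:' > "$host/dhchap_key"
  echo 'DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu:' > "$host/dhchap_ctrl_key"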
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.428 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.690 nvme0n1 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.690 07:39:45 
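[Note] get_main_ns_ip (nvmf/common.sh@769-783), traced before every attach, just maps the transport to the right address variable and prints its value; with tcp that is NVMF_INITIATOR_IP, hence the echo 10.0.0.1. A paraphrase of the traced logic, not the verbatim function body:

  # Paraphrase of nvmf/common.sh@769-783 as reconstructed from the trace.
  get_main_ns_ip() {
      local ip var
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1   # traced as: [[ -z tcp ]]
      var=${ip_candidates[$TEST_TRANSPORT]}  # -> NVMF_INITIATOR_IP
      ip=${!var}                             # indirect expansion -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }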
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.690 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.691 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.951 nvme0n1 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.951 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:17.952 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:17.952 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:17.952 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.952 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:17.952 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.952 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.952 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.952 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.952 07:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
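[Note] The for dhgroup / for keyid pair traced at host/auth.sh@101-104 drives this whole stretch of the log: with the digest pinned to sha384, every FFDHE group is exercised against every key slot. The dhgroups contents below are inferred from the groups that actually appear in the trace (ffdhe3072 through ffdhe8192 in this excerpt), so treat the list as an approximation:

  # Shape of the sweep driving this section (host/auth.sh@101-104); the
  # dhgroups list is inferred from the trace, not copied from the script.
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do                      # keyids 0..4 here
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"   # @103
          connect_authenticate sha384 "$dhgroup" "$keyid" # @104
      done
  done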
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.952 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.523 nvme0n1 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:18.523 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.524 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.093 nvme0n1 00:30:19.093 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.093 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.093 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.093 07:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.093 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.093 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.093 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.093 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.093 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.093 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.094 07:39:46 
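[Note] Every rpc_cmd above is bracketed by xtrace_disable (common/autotest_common.sh@563, which runs set +x) and a later [[ 0 == 0 ]] status check at @591. The helper bodies never appear in the trace, so the following is only a plausible reading of the pattern: tracing is muted for the noisy RPC plumbing and the saved exit code is asserted once tracing is back on:

  # Speculative sketch of the quiet-RPC pattern visible in the trace; the
  # real helpers live in common/autotest_common.sh and may differ in detail.
  quiet_rpc() {
      xtrace_disable          # @563: set +x around the RPC call
      rpc_cmd "$@"
      local rc=$?
      xtrace_restore
      [[ $rc == 0 ]]          # shows up in the trace as: [[ 0 == 0 ]]
  }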
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.094 07:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.354 nvme0n1 00:30:19.354 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.354 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.354 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.354 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.354 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.354 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.354 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.354 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.354 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.354 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:19.614 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.614 
07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.875 nvme0n1 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:19.875 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.136 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:20.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.137 07:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.397 nvme0n1 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.397 07:39:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.397 07:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.337 nvme0n1 00:30:21.337 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.338 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.908 nvme0n1 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:21.908 
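Note the nested for markers (host/auth.sh@100, @101, @102) that keep reappearing between cycles: this whole section is one sweep over every digest x dhgroup x keyid combination, re-keying the kernel target and reconnecting each time. The driver loop, reconstructed from those markers (the exact array contents are an assumption; this excerpt exercises sha384 and sha512 with ffdhe2048 through ffdhe8192 and keyids 0-4):

    # Sweep driving host/auth.sh@100-104 in the entries above.
    for digest in "${digests[@]}"; do          # sha384, sha512 in this excerpt
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 .. ffdhe8192 here
            for keyid in "${!keys[@]}"; do     # 0..4 in this run
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (@103)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (@104)
            done
        done
    done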
07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.908 07:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.848 nvme0n1 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:22.848 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.849 07:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.420 nvme0n1 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.420 07:39:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:23.420 07:39:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.420 07:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.990 nvme0n1 00:30:23.990 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.990 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:23.990 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:23.990 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.990 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.990 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:30:24.250 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:24.251 nvme0n1 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:30:24.251 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:24.511 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:24.511 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:24.511 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:24.511 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.512 nvme0n1 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:24.512 
07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.512 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.772 nvme0n1 00:30:24.772 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.772 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:24.773 
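Every cycle also replays get_main_ns_ip (nvmf/common.sh@769-783) before the attach. It maps the transport to the name of the environment variable that holds the address, then dereferences it; a sketch of that logic, reconstructed from the xtrace (the indirect expansion on the last two lines is an assumption, since set -x prints only the resulting values):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # TEST_TRANSPORT is tcp in this job, so ip becomes NVMF_INITIATOR_IP.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"    # 10.0.0.1 (NVMF_INITIATOR_IP) throughout this log
    }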
07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.773 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.034 nvme0n1 00:30:25.034 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.034 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.034 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:25.034 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.034 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.034 07:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.034 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.296 nvme0n1 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.296 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.557 nvme0n1 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.557 
07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:25.557 07:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.557 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.843 nvme0n1 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.843 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:25.844 07:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.844 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.127 nvme0n1 00:30:26.127 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.127 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:26.127 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:26.127 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.127 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.127 07:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.127 07:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.127 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.425 nvme0n1 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:26.425 
07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.425 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
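The attach traced just above is one instance of the cycle this whole excerpt repeats: for every (digest, dhgroup, keyid) combination, nvmet_auth_set_key pushes the matching parameters to the target side (the echo 'hmac(sha512)' / echo ffdhe3072 / echo DHHC-1:... lines), and connect_authenticate then drives four RPCs on the SPDK host side. A minimal sketch of one such round, assuming the suite's rpc_cmd wrapper is sourced and the DHHC-1 secrets were already registered under the key names key0..key4 / ckey0..ckey3 earlier in auth.sh (outside this excerpt):

  # Allow exactly one digest/DH-group pair for DH-HMAC-CHAP, then attach.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4   # keyid=4 has no controller key, so no --dhchap-ctrlr-key here
  # A successful handshake leaves the controller visible under its bdev name.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0   # free nvme0 for the next keyid

For keyids 0-3 the same attach also passes --dhchap-ctrlr-key ckey<N>, authenticating the controller in the reverse direction as well; that is exactly the difference visible between the key0..key3 rounds and the key4 rounds in the trace.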
00:30:26.693 nvme0n1 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:26.693 07:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.693 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.955 nvme0n1 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:26.955 07:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:26.955 07:39:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:26.955 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.956 07:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.217 nvme0n1 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.217 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.479 nvme0n1 00:30:27.479 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.479 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:27.479 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:27.479 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.479 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.479 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.740 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.002 nvme0n1 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.002 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.003 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.003 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.003 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.003 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.003 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:28.003 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.003 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.003 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:28.003 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.003 07:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.265 nvme0n1 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.265 07:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.265 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.836 nvme0n1 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:28.836 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:28.837 07:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.837 07:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.407 nvme0n1 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.407 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.667 nvme0n1 00:30:29.667 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.667 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:29.667 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:29.667 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.667 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.667 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.927 07:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.187 nvme0n1 00:30:30.187 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.187 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.187 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.187 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.187 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.187 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.187 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.187 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.187 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.187 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:30.448 07:39:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.448 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.449 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.710 nvme0n1 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODRmZTJjOWJkMzg3NjNhMGRkYWU2Mjg2YjE5NTk1MGTM/1Ey: 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: ]] 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDcxZTUyMmE5NzQzODBlODBmYmU3MzYyY2Y4ZWM1OTIxMjQ4MWY0YTMzZjdjZTM0MDgwNzRmOTY1YTI4OTE4MpIlr2U=: 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:30.710 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.971 07:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.545 nvme0n1 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:31.545 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.546 07:39:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.116 nvme0n1 00:30:32.116 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.116 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.116 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:32.116 07:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.116 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.116 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:32.377 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.378 07:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.378 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.949 nvme0n1 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I0M2RiMDhiMDc4ZmIwOTRiZTZlMGFhMGFhZjIyYmE5YWEyMWU4YTZiYmQxZGZjOPM4Ig==: 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: ]] 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Yzg3YjE0ZmEzN2I0ZGRjNjc3OTI2N2NlZWQ3ZjQ1MWGfGLXu: 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:32.949 07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.949 
07:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.892 nvme0n1 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWM2ZmMwMTRlMGQwOWJiZjZmZDkwZTU1MWNjN2Q3MjQyY2FjYTIwNWQyOGUzZWJiNTNhMzU2MzA2MmQ1Y2MwY5dBMwk=: 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.892 07:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.464 nvme0n1 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:34.464 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.465 request: 00:30:34.465 { 00:30:34.465 "name": "nvme0", 00:30:34.465 "trtype": "tcp", 00:30:34.465 "traddr": "10.0.0.1", 00:30:34.465 "adrfam": "ipv4", 00:30:34.465 "trsvcid": "4420", 00:30:34.465 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:34.465 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:34.465 "prchk_reftag": false, 00:30:34.465 "prchk_guard": false, 00:30:34.465 "hdgst": false, 00:30:34.465 "ddgst": false, 00:30:34.465 "allow_unrecognized_csi": false, 00:30:34.465 "method": "bdev_nvme_attach_controller", 00:30:34.465 "req_id": 1 00:30:34.465 } 00:30:34.465 Got JSON-RPC error response 00:30:34.465 response: 00:30:34.465 { 00:30:34.465 "code": -5, 00:30:34.465 "message": "Input/output error" 00:30:34.465 } 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
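The exchange above is the first negative test of this auth phase: the target has just been re-keyed for DH-HMAC-CHAP (nvmet_auth_set_key sha256 ffdhe2048 1, with bdev_nvme_set_options matching), so an attach attempt that presents no host key is rejected, and the initiator surfaces JSON-RPC error -5 ("Input/output error") instead of a controller. rpc_cmd is the autotest wrapper that effectively forwards its arguments to SPDK's scripts/rpc.py. A minimal by-hand sketch of the same check, assuming a target already configured as in this run and the default RPC socket (both assumptions, not shown in this log), would be:

    # Fabrics connect with no --dhchap-key: the target requires
    # DH-HMAC-CHAP here, so this is expected to fail with -5 / Input/output error.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

    # Confirm no controller leaked through; the script asserts the same
    # thing via `jq length` == 0 right after the error response.
    ./scripts/rpc.py bdev_nvme_get_controllers | jq length

The `[[ 1 == 0 ]]` / `es=1` handling in the trace is the suite's NOT helper confirming the command failed as expected, and the get_main_ns_ip trace around this point is just re-resolving the initiator address (10.0.0.1) for the next attempt.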
00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.465 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.726 request: 00:30:34.726 { 00:30:34.726 "name": "nvme0", 00:30:34.726 "trtype": "tcp", 00:30:34.726 "traddr": "10.0.0.1", 00:30:34.726 "adrfam": "ipv4", 00:30:34.726 "trsvcid": "4420", 00:30:34.726 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:34.726 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:34.726 "prchk_reftag": false, 00:30:34.726 "prchk_guard": false, 00:30:34.726 "hdgst": false, 00:30:34.726 "ddgst": false, 00:30:34.726 "dhchap_key": "key2", 00:30:34.726 "allow_unrecognized_csi": false, 00:30:34.726 "method": "bdev_nvme_attach_controller", 00:30:34.726 "req_id": 1 00:30:34.726 } 00:30:34.726 Got JSON-RPC error response 00:30:34.726 response: 00:30:34.726 { 00:30:34.726 "code": -5, 00:30:34.726 "message": "Input/output error" 00:30:34.726 } 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
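The two rejected attach attempts above are the DH-HMAC-CHAP negative paths: against a target that requires authentication, bdev_nvme_attach_controller must fail both without any --dhchap-key and with a key that does not match the key the target was set up with, and the target's rejection surfaces to the host as JSON-RPC code -5 (Input/output error). A minimal sketch of the same check, assuming the target and keyring from the preceding setup are already in place; NOT() is paraphrased from the autotest_common.sh helper traced above:

    # Succeed only if the wrapped command fails (paraphrase of NOT()).
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # No --dhchap-key at all: the authenticating target must reject this.
    NOT "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

    # A key that does not match the target's configured key (key2 here,
    # where the later successful attach uses key1): must also fail.
    NOT "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2
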
00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.726 request: 00:30:34.726 { 00:30:34.726 "name": "nvme0", 00:30:34.726 "trtype": "tcp", 00:30:34.726 "traddr": "10.0.0.1", 00:30:34.726 "adrfam": "ipv4", 00:30:34.726 "trsvcid": "4420", 00:30:34.726 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:34.726 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:34.726 "prchk_reftag": false, 00:30:34.726 "prchk_guard": false, 00:30:34.726 "hdgst": false, 00:30:34.726 "ddgst": false, 00:30:34.726 "dhchap_key": "key1", 00:30:34.726 "dhchap_ctrlr_key": "ckey2", 00:30:34.726 "allow_unrecognized_csi": false, 00:30:34.726 "method": "bdev_nvme_attach_controller", 00:30:34.726 "req_id": 1 00:30:34.726 } 00:30:34.726 Got JSON-RPC error response 00:30:34.726 response: 00:30:34.726 { 00:30:34.726 "code": -5, 00:30:34.726 "message": "Input/output 
error" 00:30:34.726 } 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.726 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.988 nvme0n1 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.988 07:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.988 request: 00:30:34.988 { 00:30:34.988 "name": "nvme0", 00:30:34.988 "dhchap_key": "key1", 00:30:34.988 "dhchap_ctrlr_key": "ckey2", 00:30:34.988 "method": "bdev_nvme_set_keys", 00:30:34.988 "req_id": 1 00:30:34.988 } 00:30:34.988 Got JSON-RPC error response 00:30:34.988 response: 00:30:34.988 { 00:30:34.988 "code": -13, 00:30:34.988 "message": "Permission denied" 00:30:34.988 } 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:34.988 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:30:35.248 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:35.248 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.248 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:35.248 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:35.248 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.248 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:35.248 07:40:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:36.190 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:36.190 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:36.190 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.190 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:36.190 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.190 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:36.190 07:40:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:37.127 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:37.127 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:37.127 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.127 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.127 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDE4ODJiNjIzMzZjZjMwODM1ODY2ODQ3OGY1NTM2NTNlZGM0ZjJjMWJhYzNiMDE5q3g0TQ==: 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: ]] 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YjI0NmE4MmZlZTVlNTNmNGI2ODIyOTFmMGU5MDkxNzY0MDZjZjA3YWRmYTg2OWE5NH3Swg==: 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.388 nvme0n1 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjU5M2QwN2E1YTMwNmQwNmRmZTc5ZmY4OWQ5ZTU3MWQJgMIV: 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: ]] 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjZhZGZmN2ZmN2YwMGI1ZWNiN2QyZmNiYWZhZTgzMTcY9xUP: 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.388 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.389 request: 00:30:37.389 { 00:30:37.389 "name": "nvme0", 00:30:37.389 "dhchap_key": "key2", 00:30:37.389 "dhchap_ctrlr_key": "ckey1", 00:30:37.389 "method": "bdev_nvme_set_keys", 00:30:37.389 "req_id": 1 00:30:37.389 } 00:30:37.389 Got JSON-RPC error response 00:30:37.389 response: 00:30:37.389 { 00:30:37.389 "code": -13, 00:30:37.389 "message": "Permission denied" 00:30:37.389 } 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.389 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:37.649 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.649 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:30:37.649 07:40:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:30:38.590 07:40:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:38.590 rmmod nvme_tcp 00:30:38.590 rmmod nvme_fabrics 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1614135 ']' 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1614135 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1614135 ']' 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1614135 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1614135 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1614135' 00:30:38.590 killing process with pid 1614135 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1614135 00:30:38.590 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1614135 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:30:38.850 07:40:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.761 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:40.761 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:40.762 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:41.022 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:41.022 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:41.022 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:30:41.022 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:41.022 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:41.022 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:41.022 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:41.022 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:41.022 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:41.022 07:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:44.316 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:30:44.576 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:30:45.146 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5cJ /tmp/spdk.key-null.lNS /tmp/spdk.key-sha256.yy7 /tmp/spdk.key-sha384.OCU /tmp/spdk.key-sha512.oSs /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:45.146 07:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:48.445 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:30:48.445 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:48.445 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:48.445 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:49.016 00:30:49.016 real 1m0.997s 00:30:49.016 user 0m54.777s 00:30:49.016 sys 0m16.159s 00:30:49.016 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.016 07:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.016 ************************************ 00:30:49.016 END TEST nvmf_auth_host 00:30:49.016 ************************************ 00:30:49.016 07:40:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:30:49.016 07:40:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:49.016 07:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:49.016 07:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:49.016 07:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:49.016 ************************************ 00:30:49.016 START TEST nvmf_digest 00:30:49.016 ************************************ 00:30:49.016 07:40:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:49.016 * Looking for test storage... 
00:30:49.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:49.016 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:49.276 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:49.276 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:49.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.277 --rc genhtml_branch_coverage=1 00:30:49.277 --rc genhtml_function_coverage=1 00:30:49.277 --rc genhtml_legend=1 00:30:49.277 --rc geninfo_all_blocks=1 00:30:49.277 --rc geninfo_unexecuted_blocks=1 00:30:49.277 00:30:49.277 ' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:49.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.277 --rc genhtml_branch_coverage=1 00:30:49.277 --rc genhtml_function_coverage=1 00:30:49.277 --rc genhtml_legend=1 00:30:49.277 --rc geninfo_all_blocks=1 00:30:49.277 --rc geninfo_unexecuted_blocks=1 00:30:49.277 00:30:49.277 ' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:49.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.277 --rc genhtml_branch_coverage=1 00:30:49.277 --rc genhtml_function_coverage=1 00:30:49.277 --rc genhtml_legend=1 00:30:49.277 --rc geninfo_all_blocks=1 00:30:49.277 --rc geninfo_unexecuted_blocks=1 00:30:49.277 00:30:49.277 ' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:49.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:49.277 --rc genhtml_branch_coverage=1 00:30:49.277 --rc genhtml_function_coverage=1 00:30:49.277 --rc genhtml_legend=1 00:30:49.277 --rc geninfo_all_blocks=1 00:30:49.277 --rc geninfo_unexecuted_blocks=1 00:30:49.277 00:30:49.277 ' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:49.277 
07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:49.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:49.277 07:40:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:49.277 07:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.409 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.410 
07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:57.410 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:57.410 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:57.410 Found net devices under 0000:4b:00.0: cvl_0_0 
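The enumeration above walks the PCI bus for supported NVMe-oF NICs (here the two e810 0x159b functions) and resolves each to its kernel net device, cvl_0_0 and cvl_0_1. The entries that follow split those ports between a private network namespace (target side, 10.0.0.2) and the root namespace (initiator side, 10.0.0.1) so a single machine can exercise a real NVMe/TCP link. A condensed sketch of that plumbing, paraphrasing nvmf_tcp_init from the test's common.sh with the device names and addresses shown in the log:

    TARGET_IF=cvl_0_0        # moved into the namespace, becomes 10.0.0.2
    INITIATOR_IF=cvl_0_1     # stays in the root namespace, becomes 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port toward the initiator, then verify reachability
    # in both directions (the ping statistics that follow confirm this).
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
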
00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:57.410 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:57.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:30:57.410 00:30:57.410 --- 10.0.0.2 ping statistics --- 00:30:57.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.410 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:57.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:30:57.410 00:30:57.410 --- 10.0.0.1 ping statistics --- 00:30:57.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.410 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:57.410 ************************************ 00:30:57.410 START TEST nvmf_digest_clean 00:30:57.410 ************************************ 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1631136 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1631136 00:30:57.410 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:57.411 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1631136 ']' 00:30:57.411 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.411 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.411 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.411 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.411 07:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:57.411 [2024-11-26 07:40:24.803259] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:30:57.411 [2024-11-26 07:40:24.803323] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.411 [2024-11-26 07:40:24.886543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.411 [2024-11-26 07:40:24.937179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.411 [2024-11-26 07:40:24.937227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.411 [2024-11-26 07:40:24.937236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.411 [2024-11-26 07:40:24.937243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.411 [2024-11-26 07:40:24.937249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
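At this point the nvmf target has been started inside the cvl_0_0_ns_spdk namespace created earlier (so it owns cvl_0_0/10.0.0.2 while the initiator side keeps cvl_0_1/10.0.0.1), with --wait-for-rpc so configuration RPCs can be issued before the framework initializes. A simplified sketch of that launch-and-wait pattern; the until-loop is a stand-in for the suite's waitforlisten helper, and rpc_get_methods is used only as a cheap RPC to probe readiness:

    NS=cvl_0_0_ns_spdk
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Wait until the app answers on its UNIX-domain RPC socket.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done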
00:30:57.411 [2024-11-26 07:40:24.937999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.671 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:57.671 null0 00:30:57.671 [2024-11-26 07:40:25.756893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.931 [2024-11-26 07:40:25.781205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1631397 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1631397 /var/tmp/bperf.sock 00:30:57.931 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1631397 ']' 00:30:57.932 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:57.932 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:57.932 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.932 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:57.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:57.932 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.932 07:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:57.932 [2024-11-26 07:40:25.841275] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:30:57.932 [2024-11-26 07:40:25.841339] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1631397 ] 00:30:57.932 [2024-11-26 07:40:25.932180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.932 [2024-11-26 07:40:25.985181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.871 07:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.871 07:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:58.871 07:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:58.871 07:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:58.871 07:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:58.871 07:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:58.871 07:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:59.441 nvme0n1 00:30:59.441 07:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:59.441 07:40:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:59.441 Running I/O for 2 seconds... 
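Everything bdevperf does in this test is driven over its private RPC socket: the framework is initialized (bdevperf was also started with --wait-for-rpc), a controller is attached with --ddgst so every TCP data PDU carries a crc32c data digest, and perform_tests starts the timed workload. A condensed sketch of that sequence, with the socket, address and NQN taken from the log and the in-tree script paths shortened:

    BPERF_SOCK=/var/tmp/bperf.sock
    RPC=./scripts/rpc.py
    # bdevperf was started with --wait-for-rpc, so init its framework first.
    $RPC -s $BPERF_SOCK framework_start_init
    # Attach the target subsystem with data digest enabled (--ddgst).
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Start the timed run; bdevperf prints the results itself.
    ./examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests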
00:31:01.320 19132.00 IOPS, 74.73 MiB/s [2024-11-26T06:40:29.418Z] 19603.50 IOPS, 76.58 MiB/s 00:31:01.320 Latency(us) 00:31:01.320 [2024-11-26T06:40:29.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.320 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:01.320 nvme0n1 : 2.01 19617.01 76.63 0.00 0.00 6516.46 2949.12 17803.95 00:31:01.320 [2024-11-26T06:40:29.418Z] =================================================================================================================== 00:31:01.320 [2024-11-26T06:40:29.418Z] Total : 19617.01 76.63 0.00 0.00 6516.46 2949.12 17803.95 00:31:01.320 { 00:31:01.320 "results": [ 00:31:01.320 { 00:31:01.320 "job": "nvme0n1", 00:31:01.320 "core_mask": "0x2", 00:31:01.320 "workload": "randread", 00:31:01.320 "status": "finished", 00:31:01.320 "queue_depth": 128, 00:31:01.320 "io_size": 4096, 00:31:01.321 "runtime": 2.005148, 00:31:01.321 "iops": 19617.0058270013, 00:31:01.321 "mibps": 76.62892901172383, 00:31:01.321 "io_failed": 0, 00:31:01.321 "io_timeout": 0, 00:31:01.321 "avg_latency_us": 6516.462308546248, 00:31:01.321 "min_latency_us": 2949.12, 00:31:01.321 "max_latency_us": 17803.946666666667 00:31:01.321 } 00:31:01.321 ], 00:31:01.321 "core_count": 1 00:31:01.321 } 00:31:01.321 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:01.321 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:01.321 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:01.321 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:01.321 | select(.opcode=="crc32c") 00:31:01.321 | "\(.module_name) \(.executed)"' 00:31:01.321 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1631397 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1631397 ']' 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1631397 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1631397 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1631397' 00:31:01.580 killing process with pid 1631397 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1631397 00:31:01.580 Received shutdown signal, test time was about 2.000000 seconds 00:31:01.580 00:31:01.580 Latency(us) 00:31:01.580 [2024-11-26T06:40:29.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.580 [2024-11-26T06:40:29.678Z] =================================================================================================================== 00:31:01.580 [2024-11-26T06:40:29.678Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:01.580 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1631397 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1632171 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1632171 /var/tmp/bperf.sock 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1632171 ']' 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:01.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.839 07:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:01.839 [2024-11-26 07:40:29.808584] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:31:01.839 [2024-11-26 07:40:29.808656] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632171 ] 00:31:01.839 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:01.839 Zero copy mechanism will not be used. 00:31:01.839 [2024-11-26 07:40:29.904081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.098 [2024-11-26 07:40:29.939451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.668 07:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.668 07:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:02.668 07:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:02.668 07:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:02.668 07:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:02.928 07:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:02.928 07:40:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:03.188 nvme0n1 00:31:03.188 07:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:03.188 07:40:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:03.447 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:03.447 Zero copy mechanism will not be used. 00:31:03.447 Running I/O for 2 seconds... 
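Each timed run in this test is followed by an accel-statistics check: the test reads bdevperf's accel stats and verifies that crc32c operations were actually executed, and by which module (software here; dsa when DSA offload is in play). A sketch of that verification, reusing the jq filter visible in the log:

    # Ask bdevperf which accel module executed the crc32c operations.
    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) || echo "no crc32c operations recorded" >&2
    [[ $acc_module == software ]] && echo "crc32c was computed in software"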
00:31:05.328 4116.00 IOPS, 514.50 MiB/s [2024-11-26T06:40:33.426Z] 3581.00 IOPS, 447.62 MiB/s 00:31:05.328 Latency(us) 00:31:05.328 [2024-11-26T06:40:33.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.328 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:31:05.328 nvme0n1 : 2.00 3580.19 447.52 0.00 0.00 4466.00 525.65 15073.28 00:31:05.328 [2024-11-26T06:40:33.426Z] =================================================================================================================== 00:31:05.328 [2024-11-26T06:40:33.426Z] Total : 3580.19 447.52 0.00 0.00 4466.00 525.65 15073.28 00:31:05.328 { 00:31:05.328 "results": [ 00:31:05.328 { 00:31:05.328 "job": "nvme0n1", 00:31:05.328 "core_mask": "0x2", 00:31:05.328 "workload": "randread", 00:31:05.328 "status": "finished", 00:31:05.328 "queue_depth": 16, 00:31:05.328 "io_size": 131072, 00:31:05.328 "runtime": 2.004921, 00:31:05.328 "iops": 3580.1909401916582, 00:31:05.328 "mibps": 447.5238675239573, 00:31:05.328 "io_failed": 0, 00:31:05.328 "io_timeout": 0, 00:31:05.328 "avg_latency_us": 4466.001671774869, 00:31:05.328 "min_latency_us": 525.6533333333333, 00:31:05.328 "max_latency_us": 15073.28 00:31:05.328 } 00:31:05.328 ], 00:31:05.328 "core_count": 1 00:31:05.328 } 00:31:05.328 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:05.328 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:05.328 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:05.328 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:05.328 | select(.opcode=="crc32c") 00:31:05.328 | "\(.module_name) \(.executed)"' 00:31:05.328 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:05.589 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1632171 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1632171 ']' 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1632171 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1632171 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1632171' 00:31:05.590 killing process with pid 1632171 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1632171 00:31:05.590 Received shutdown signal, test time was about 2.000000 seconds 00:31:05.590 00:31:05.590 Latency(us) 00:31:05.590 [2024-11-26T06:40:33.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.590 [2024-11-26T06:40:33.688Z] =================================================================================================================== 00:31:05.590 [2024-11-26T06:40:33.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1632171 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:31:05.590 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:05.851 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1632856 00:31:05.851 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1632856 /var/tmp/bperf.sock 00:31:05.851 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1632856 ']' 00:31:05.851 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:05.851 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:05.851 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.851 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:05.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:05.851 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.851 07:40:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:05.851 [2024-11-26 07:40:33.732415] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:31:05.851 [2024-11-26 07:40:33.732470] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632856 ] 00:31:05.851 [2024-11-26 07:40:33.815724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.851 [2024-11-26 07:40:33.844525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.862 07:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:06.862 07:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:06.862 07:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:06.862 07:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:06.862 07:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:06.862 07:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:06.862 07:40:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:07.122 nvme0n1 00:31:07.122 07:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:07.122 07:40:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:07.122 Running I/O for 2 seconds... 
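Between runs the previous bdevperf instance is torn down with the killprocess helper, visible above as the kill -0 / ps --no-headers / kill / wait sequence. A reduced sketch of that pattern; the real helper in autotest_common.sh also special-cases processes running under sudo, which is omitted here:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 for bdevperf
        echo "killing process with pid $pid ($process_name)"
        kill "$pid"
        wait "$pid" 2>/dev/null
    }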
00:31:09.450 29477.00 IOPS, 115.14 MiB/s [2024-11-26T06:40:37.548Z] 29622.50 IOPS, 115.71 MiB/s 00:31:09.450 Latency(us) 00:31:09.450 [2024-11-26T06:40:37.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.450 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.450 nvme0n1 : 2.01 29624.10 115.72 0.00 0.00 4313.78 2184.53 13926.40 00:31:09.450 [2024-11-26T06:40:37.548Z] =================================================================================================================== 00:31:09.450 [2024-11-26T06:40:37.548Z] Total : 29624.10 115.72 0.00 0.00 4313.78 2184.53 13926.40 00:31:09.450 { 00:31:09.450 "results": [ 00:31:09.450 { 00:31:09.450 "job": "nvme0n1", 00:31:09.450 "core_mask": "0x2", 00:31:09.450 "workload": "randwrite", 00:31:09.450 "status": "finished", 00:31:09.450 "queue_depth": 128, 00:31:09.450 "io_size": 4096, 00:31:09.450 "runtime": 2.005563, 00:31:09.450 "iops": 29624.100564280456, 00:31:09.450 "mibps": 115.71914282922053, 00:31:09.450 "io_failed": 0, 00:31:09.450 "io_timeout": 0, 00:31:09.450 "avg_latency_us": 4313.781616817868, 00:31:09.450 "min_latency_us": 2184.5333333333333, 00:31:09.450 "max_latency_us": 13926.4 00:31:09.450 } 00:31:09.450 ], 00:31:09.450 "core_count": 1 00:31:09.450 } 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:09.450 | select(.opcode=="crc32c") 00:31:09.450 | "\(.module_name) \(.executed)"' 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1632856 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1632856 ']' 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1632856 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1632856 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1632856' 00:31:09.450 killing process with pid 1632856 00:31:09.450 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1632856 00:31:09.450 Received shutdown signal, test time was about 2.000000 seconds 00:31:09.450 00:31:09.450 Latency(us) 00:31:09.450 [2024-11-26T06:40:37.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.450 [2024-11-26T06:40:37.548Z] =================================================================================================================== 00:31:09.450 [2024-11-26T06:40:37.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:09.451 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1632856 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1633543 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1633543 /var/tmp/bperf.sock 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1633543 ']' 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:09.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:09.712 07:40:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:09.712 [2024-11-26 07:40:37.624308] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:31:09.712 [2024-11-26 07:40:37.624383] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633543 ] 00:31:09.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:09.712 Zero copy mechanism will not be used. 00:31:09.712 [2024-11-26 07:40:37.709761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.712 [2024-11-26 07:40:37.739321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.655 07:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:10.655 07:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:31:10.655 07:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:31:10.655 07:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:31:10.655 07:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:10.655 07:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:10.655 07:40:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:10.914 nvme0n1 00:31:11.180 07:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:31:11.180 07:40:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:11.180 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:11.180 Zero copy mechanism will not be used. 00:31:11.180 Running I/O for 2 seconds... 
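Every run ends with both a human-readable latency table and a JSON blob carrying the same numbers. For post-processing it is the JSON that is convenient to consume; a sketch under the assumption that one run's {"results": [...]} object has been captured to a file named results.json (the field names match the blobs in this log, the capture step itself is hypothetical):

    # Pull IOPS and mean latency for each job out of bdevperf's JSON summary.
    jq -r '.results[]
        | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us (qd \(.queue_depth))"' \
        results.json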
00:31:13.065 6452.00 IOPS, 806.50 MiB/s [2024-11-26T06:40:41.163Z] 5716.50 IOPS, 714.56 MiB/s 00:31:13.065 Latency(us) 00:31:13.065 [2024-11-26T06:40:41.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.065 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:13.065 nvme0n1 : 2.01 5709.05 713.63 0.00 0.00 2796.74 1105.92 11905.71 00:31:13.065 [2024-11-26T06:40:41.163Z] =================================================================================================================== 00:31:13.065 [2024-11-26T06:40:41.163Z] Total : 5709.05 713.63 0.00 0.00 2796.74 1105.92 11905.71 00:31:13.065 { 00:31:13.065 "results": [ 00:31:13.065 { 00:31:13.065 "job": "nvme0n1", 00:31:13.065 "core_mask": "0x2", 00:31:13.065 "workload": "randwrite", 00:31:13.065 "status": "finished", 00:31:13.065 "queue_depth": 16, 00:31:13.065 "io_size": 131072, 00:31:13.065 "runtime": 2.005411, 00:31:13.065 "iops": 5709.05415398639, 00:31:13.065 "mibps": 713.6317692482987, 00:31:13.065 "io_failed": 0, 00:31:13.065 "io_timeout": 0, 00:31:13.065 "avg_latency_us": 2796.739665181821, 00:31:13.065 "min_latency_us": 1105.92, 00:31:13.065 "max_latency_us": 11905.706666666667 00:31:13.065 } 00:31:13.065 ], 00:31:13.065 "core_count": 1 00:31:13.065 } 00:31:13.065 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:31:13.065 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:31:13.065 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:31:13.065 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:31:13.065 | select(.opcode=="crc32c") 00:31:13.065 | "\(.module_name) \(.executed)"' 00:31:13.065 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1633543 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1633543 ']' 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1633543 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1633543 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1633543' 00:31:13.325 killing process with pid 1633543 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1633543 00:31:13.325 Received shutdown signal, test time was about 2.000000 seconds 00:31:13.325 00:31:13.325 Latency(us) 00:31:13.325 [2024-11-26T06:40:41.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.325 [2024-11-26T06:40:41.423Z] =================================================================================================================== 00:31:13.325 [2024-11-26T06:40:41.423Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:13.325 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1633543 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1631136 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1631136 ']' 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1631136 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1631136 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1631136' 00:31:13.586 killing process with pid 1631136 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1631136 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1631136 00:31:13.586 00:31:13.586 real 0m16.940s 00:31:13.586 user 0m33.410s 00:31:13.586 sys 0m3.835s 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:13.586 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:31:13.586 ************************************ 00:31:13.586 END TEST nvmf_digest_clean 00:31:13.586 ************************************ 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:13.847 ************************************ 00:31:13.847 START TEST nvmf_digest_error 00:31:13.847 ************************************ 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1634476 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1634476 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1634476 ']' 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.847 07:40:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:13.847 [2024-11-26 07:40:41.823659] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:31:13.847 [2024-11-26 07:40:41.823714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:13.847 [2024-11-26 07:40:41.913969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.108 [2024-11-26 07:40:41.944861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.108 [2024-11-26 07:40:41.944889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.108 [2024-11-26 07:40:41.944895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.108 [2024-11-26 07:40:41.944900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.108 [2024-11-26 07:40:41.944904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:14.108 [2024-11-26 07:40:41.945366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:14.678 [2024-11-26 07:40:42.651304] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:14.678 null0 00:31:14.678 [2024-11-26 07:40:42.729101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.678 [2024-11-26 07:40:42.753309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1634602 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1634602 /var/tmp/bperf.sock 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1634602 ']' 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
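The digest-error test reconfigures the target before its framework initializes: because nvmf_tgt was again started with --wait-for-rpc, the crc32c opcode can be reassigned from the software accel module to the error-injection module, which is what later allows digests to be corrupted on demand. A sketch of that target-side step; the trailing framework_start_init is an assumption (the log shows that call only against the bdevperf socket, but the paused target has to be released the same way):

    # rpc_cmd in this suite talks to the nvmf target inside the test namespace.
    RPC_NS="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py -s /var/tmp/spdk.sock"
    # Route all crc32c work through the error-injection accel module while the
    # framework is still paused by --wait-for-rpc...
    $RPC_NS accel_assign_opc -o crc32c -m error
    # ...then let initialization proceed (assumed step, see note above).
    $RPC_NS framework_start_init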
00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:14.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.678 07:40:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:14.938 [2024-11-26 07:40:42.809138] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:31:14.938 [2024-11-26 07:40:42.809192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1634602 ] 00:31:14.938 [2024-11-26 07:40:42.893102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.938 [2024-11-26 07:40:42.923417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.878 07:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:15.878 07:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:15.878 07:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:15.878 07:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:15.878 07:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:15.878 07:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.878 07:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:15.878 07:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.878 07:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:15.878 07:40:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:16.138 nvme0n1 00:31:16.138 07:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:31:16.138 07:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.138 07:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
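With crc32c routed through the error module, injection is kept disabled while the controller attaches, then armed with -t corrupt -i 256 so digest computations start producing wrong values; the flood of "data digest error on tqpair" / COMMAND TRANSIENT TRANSPORT ERROR completions below is the initiator detecting those corrupt digests and retrying (bdevperf was configured with --bdev-retry-count -1 above). A sketch of the two injection RPCs as they appear in the log; reading -i 256 as an injection interval/count of 256 operations is an assumption, the log does not spell the flag out:

    RPC_NS="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py -s /var/tmp/spdk.sock"
    # Injection stays disabled while bdevperf attaches the controller...
    $RPC_NS accel_error_inject_error -o crc32c -t disable
    # ...then crc32c results are corrupted; -i 256 taken to mean every/for 256
    # operations (assumption, see note above).
    $RPC_NS accel_error_inject_error -o crc32c -t corrupt -i 256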
00:31:16.138 07:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.138 07:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:16.138 07:40:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:16.138 Running I/O for 2 seconds... 00:31:16.138 [2024-11-26 07:40:44.187095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:16.138 [2024-11-26 07:40:44.187127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.138 [2024-11-26 07:40:44.187137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.138 [2024-11-26 07:40:44.198087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:16.138 [2024-11-26 07:40:44.198112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.138 [2024-11-26 07:40:44.198120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.138 [2024-11-26 07:40:44.209828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:16.138 [2024-11-26 07:40:44.209846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.138 [2024-11-26 07:40:44.209853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.139 [2024-11-26 07:40:44.218991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:16.139 [2024-11-26 07:40:44.219009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.139 [2024-11-26 07:40:44.219016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.139 [2024-11-26 07:40:44.227274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:16.139 [2024-11-26 07:40:44.227291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.139 [2024-11-26 07:40:44.227298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:16.399 [2024-11-26 07:40:44.236064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:16.399 [2024-11-26 07:40:44.236082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.399 [2024-11-26 07:40:44.236088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
[... the same three-record pattern (data digest error, the victim READ command, a TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every queued READ from 07:40:44.198 through 07:40:45.162, differing only in cid and lba; the repeated records are elided here ...]
27675.00 IOPS, 108.11 MiB/s [2024-11-26T06:40:45.283Z]
[... digest error records continue in the same pattern from 07:40:45.173 through 07:40:45.289 ...]
[2024-11-26 07:40:45.300457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700)
[2024-11-26 07:40:45.300474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-26 07:40:45.300481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.446 [2024-11-26 07:40:45.308037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.446 [2024-11-26 07:40:45.308054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.446 [2024-11-26 07:40:45.308060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.446 [2024-11-26 07:40:45.317050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.446 [2024-11-26 07:40:45.317067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.446 [2024-11-26 07:40:45.317073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.446 [2024-11-26 07:40:45.326862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.446 [2024-11-26 07:40:45.326879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.446 [2024-11-26 07:40:45.326885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.446 [2024-11-26 07:40:45.336627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.446 [2024-11-26 07:40:45.336643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.446 [2024-11-26 07:40:45.336649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.446 [2024-11-26 07:40:45.345137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.446 [2024-11-26 07:40:45.345154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.446 [2024-11-26 07:40:45.345166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.446 [2024-11-26 07:40:45.353396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.446 [2024-11-26 07:40:45.353413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.446 [2024-11-26 07:40:45.353419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.446 [2024-11-26 07:40:45.362567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.446 [2024-11-26 07:40:45.362583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.446 [2024-11-26 07:40:45.362589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.446 [2024-11-26 07:40:45.372050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.446 [2024-11-26 07:40:45.372070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.446 [2024-11-26 07:40:45.372076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.446 [2024-11-26 07:40:45.380831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.380848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.380854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.389166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.389183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.389189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.398407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.398423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.398429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.409931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.409947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.409953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.418809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.418825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.418831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.427691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.427707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:17.447 [2024-11-26 07:40:45.427713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.438754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.438771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.438777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.446276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.446293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.446299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.456054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.456070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.456076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.465365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.465381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.465387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.474755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.474772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.474778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.482683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.482700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.482706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.492518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.492534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22052 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.492541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.501409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.501426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.501432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.509418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.509435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.509441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.518911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.518927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.518934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.528642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.528658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.528668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.447 [2024-11-26 07:40:45.536877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.447 [2024-11-26 07:40:45.536894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.447 [2024-11-26 07:40:45.536900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.708 [2024-11-26 07:40:45.546617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.708 [2024-11-26 07:40:45.546635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.708 [2024-11-26 07:40:45.546641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.708 [2024-11-26 07:40:45.556577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.556594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.556600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.564890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.564906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.564912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.572704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.572720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.572726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.581883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.581899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.581905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.590776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.590793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.590799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.600604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.600621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.600627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.611556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.611577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.611583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.620302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.620318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.620324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.628531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.628548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.628554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.638072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.638088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.638095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.647718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.647734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.647740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.656115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.656132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.656138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.669533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.669549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.669555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.679821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.679838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.679844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.689852] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.689869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.689875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.698691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.698707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.698714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.707378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.707394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.707400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.716519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.716536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.716543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.725185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.725202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.725208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.734301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.734318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.734324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.743229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.743245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.743251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:31:17.709 [2024-11-26 07:40:45.752055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.752071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.752077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.760750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.760767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.760773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.769323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.769340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.769349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.778147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.778169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.778175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.787579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.787595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.787602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.709 [2024-11-26 07:40:45.796977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.709 [2024-11-26 07:40:45.796994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.709 [2024-11-26 07:40:45.797000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.805578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.805596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.805602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.814061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.814078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.814084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.823979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.823996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.824002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.832861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.832877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.832883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.842176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.842192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.842199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.850146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.850167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.850173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.859170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.859187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.859193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.868382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.868399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.868405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.877230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.877247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.877254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.885375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.885392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.885399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.895707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.895724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.895730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.904622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.904639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.904645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.913368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.913385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.913391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.922439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.922455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.922465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.932045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.932061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 
[2024-11-26 07:40:45.932068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.940892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.940909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.940915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.950341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.950358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.950364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.959669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.959685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.959692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.967968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.967985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.967991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.977394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.971 [2024-11-26 07:40:45.977411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.971 [2024-11-26 07:40:45.977417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.971 [2024-11-26 07:40:45.987149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.972 [2024-11-26 07:40:45.987169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.972 [2024-11-26 07:40:45.987175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.972 [2024-11-26 07:40:45.994808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.972 [2024-11-26 07:40:45.994824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10751 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.972 [2024-11-26 07:40:45.994830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.972 [2024-11-26 07:40:46.005018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.972 [2024-11-26 07:40:46.005037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.972 [2024-11-26 07:40:46.005044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.972 [2024-11-26 07:40:46.014004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.972 [2024-11-26 07:40:46.014021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.972 [2024-11-26 07:40:46.014027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.972 [2024-11-26 07:40:46.023637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.972 [2024-11-26 07:40:46.023654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.972 [2024-11-26 07:40:46.023660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.972 [2024-11-26 07:40:46.031668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.972 [2024-11-26 07:40:46.031684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.972 [2024-11-26 07:40:46.031690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.972 [2024-11-26 07:40:46.042048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.972 [2024-11-26 07:40:46.042065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.972 [2024-11-26 07:40:46.042071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.972 [2024-11-26 07:40:46.049747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.972 [2024-11-26 07:40:46.049764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.972 [2024-11-26 07:40:46.049770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:17.972 [2024-11-26 07:40:46.059563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:17.972 [2024-11-26 07:40:46.059580] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:17.972 [2024-11-26 07:40:46.059586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.232 [2024-11-26 07:40:46.068530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.232 [2024-11-26 07:40:46.068546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.232 [2024-11-26 07:40:46.068552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.232 [2024-11-26 07:40:46.077371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.232 [2024-11-26 07:40:46.077387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.232 [2024-11-26 07:40:46.077393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.232 [2024-11-26 07:40:46.086676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.232 [2024-11-26 07:40:46.086692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.232 [2024-11-26 07:40:46.086698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.232 [2024-11-26 07:40:46.096415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.232 [2024-11-26 07:40:46.096431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.232 [2024-11-26 07:40:46.096437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.232 [2024-11-26 07:40:46.104194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.232 [2024-11-26 07:40:46.104210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.232 [2024-11-26 07:40:46.104216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.232 [2024-11-26 07:40:46.112852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.232 [2024-11-26 07:40:46.112869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.232 [2024-11-26 07:40:46.112875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.232 [2024-11-26 07:40:46.122249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.233 [2024-11-26 
07:40:46.122266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.233 [2024-11-26 07:40:46.122272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.233 [2024-11-26 07:40:46.131862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.233 [2024-11-26 07:40:46.131879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.233 [2024-11-26 07:40:46.131886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.233 [2024-11-26 07:40:46.140908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.233 [2024-11-26 07:40:46.140924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.233 [2024-11-26 07:40:46.140931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.233 [2024-11-26 07:40:46.149027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.233 [2024-11-26 07:40:46.149043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.233 [2024-11-26 07:40:46.149050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.233 [2024-11-26 07:40:46.158249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.233 [2024-11-26 07:40:46.158266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.233 [2024-11-26 07:40:46.158275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.233 [2024-11-26 07:40:46.167919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16a7700) 00:31:18.233 [2024-11-26 07:40:46.167935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.233 [2024-11-26 07:40:46.167941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.233 27842.00 IOPS, 108.76 MiB/s 00:31:18.233 Latency(us) 00:31:18.233 [2024-11-26T06:40:46.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:18.233 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:18.233 nvme0n1 : 2.00 27850.91 108.79 0.00 0.00 4591.05 2280.11 18240.85 00:31:18.233 [2024-11-26T06:40:46.331Z] =================================================================================================================== 00:31:18.233 [2024-11-26T06:40:46.331Z] Total : 27850.91 108.79 0.00 0.00 4591.05 2280.11 18240.85 00:31:18.233 { 00:31:18.233 "results": 
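Note that bdevperf reports the job as clean (Fail/s and TO/s both 0.00) even though every READ above completed with the transient (00/22) status: the NVMe bdev module retries those I/Os (the script passes --bdev-retry-count -1 to bdev_nvme_set_options, visible in the second run below, and presumably did the same here), so the injected digest failures surface only in the controller's error counters, which the script checks next. The JSON block that follows is the machine-readable form of the same table; assuming it were captured to a file (the file name here is hypothetical), the headline figures could be pulled out with jq:

    # bperf_result.json is a hypothetical saved copy of the JSON block below
    jq -r '.results[0] | "\(.iops) IOPS, \(.io_failed) failed, avg \(.avg_latency_us) us"' bperf_result.json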
00:31:18.233 {
00:31:18.233   "results": [
00:31:18.233     {
00:31:18.233       "job": "nvme0n1",
00:31:18.233       "core_mask": "0x2",
00:31:18.233       "workload": "randread",
00:31:18.233       "status": "finished",
00:31:18.233       "queue_depth": 128,
00:31:18.233       "io_size": 4096,
00:31:18.233       "runtime": 2.003956,
00:31:18.233       "iops": 27850.910898243274,
00:31:18.233       "mibps": 108.79262069626279,
00:31:18.233       "io_failed": 0,
00:31:18.233       "io_timeout": 0,
00:31:18.233       "avg_latency_us": 4591.0491660097,
00:31:18.233       "min_latency_us": 2280.1066666666666,
00:31:18.233       "max_latency_us": 18240.853333333333
00:31:18.233     }
00:31:18.233   ],
00:31:18.233   "core_count": 1
00:31:18.233 }
00:31:18.233 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1634602
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1634602 ']'
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1634602
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1634602
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1634602'
killing process with pid 1634602
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1634602
Received shutdown signal, test time was about 2.000000 seconds
00:31:18.494
00:31:18.494 Latency(us)
00:31:18.494 [2024-11-26T06:40:46.592Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:18.494 [2024-11-26T06:40:46.592Z] ===================================================================================================================
00:31:18.494 [2024-11-26T06:40:46.592Z] Total :                 0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:31:18.494 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1634602
00:31:18.494 07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
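The 218 tested at digest.sh@71 is the command_transient_transport_error counter that the injected digest errors incremented; it is read back through bdev_get_iostat and the jq filter shown in the trace above. A minimal standalone version of that probe, assuming the same bperf socket and with $SPDK_DIR standing in for the job's spdk checkout, would be:

    # pull the per-controller transient-transport-error count for nvme0n1
    errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # this leg of the test passes only if injected errors were observed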
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1635328
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1635328 /var/tmp/bperf.sock
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1635328 ']'
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
07:40:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:18.754 [2024-11-26 07:40:46.591004] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:31:18.754 [2024-11-26 07:40:46.591058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635328 ]
00:31:18.754 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:18.754 Zero copy mechanism will not be used.
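digest.sh@57-60 above launch a second bdevperf in wait mode (-z) and block until its RPC socket answers, so error injection can be configured before any I/O is issued. A rough standalone equivalent of the launch-and-wait step, assuming $SPDK_DIR points at the job's spdk checkout (flags copied from the trace; the polling loop is a crude stand-in for autotest_common.sh's waitforlisten, not its actual implementation):

    # -z parks bdevperf until an RPC tells it to start
    "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # poll until the UNIX socket accepts RPCs
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done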
00:31:18.754 [2024-11-26 07:40:46.679574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.754 [2024-11-26 07:40:46.708902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.323 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.323 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:19.323 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:19.323 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:19.584 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:19.584 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.584 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:19.584 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.584 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:19.584 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:19.844 nvme0n1 00:31:20.105 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:20.105 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.105 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:20.105 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.105 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:20.105 07:40:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:20.105 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:20.105 Zero copy mechanism will not be used. 00:31:20.105 Running I/O for 2 seconds... 
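(Editor's note: before the two-second run above starts, the script wires the failure path end to end using the RPCs visible in this excerpt. A condensed sketch — socket paths, address, and NQN taken from the log; the accel injection goes through the suite's rpc_cmd helper, which I assume targets the nvmf target app's default RPC socket rather than bperf.sock:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Count NVMe error completions per status code and retry failed I/O
  # indefinitely, so injected errors show up in iostat rather than io_failed.
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the target with data digest (--ddgst) enabled; digests are
  # crc32c values computed through the accel framework.
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 32nd crc32c result, so the host receive path sees
  # mismatched data digests (assumed to run against the default socket).
  "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Release the queued bdevperf job (randread, 128 KiB I/O, qd 16, 2 s).
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces as one of the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follow. End of note.)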
00:31:20.105 [2024-11-26 07:40:48.049559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:20.105 [2024-11-26 07:40:48.049592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.105 [2024-11-26 07:40:48.049602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.105 [2024-11-26 07:40:48.059635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:20.105 [2024-11-26 07:40:48.059658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.105 [2024-11-26 07:40:48.059666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.105 [2024-11-26 07:40:48.066389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:20.105 [2024-11-26 07:40:48.066410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.105 [2024-11-26 07:40:48.066416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:20.105 [2024-11-26 07:40:48.077052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:20.105 [2024-11-26 07:40:48.077071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.105 [2024-11-26 07:40:48.077078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:20.105 [2024-11-26 07:40:48.088466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:20.105 [2024-11-26 07:40:48.088485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.105 [2024-11-26 07:40:48.088492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:20.105 [2024-11-26 07:40:48.098271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:20.105 [2024-11-26 07:40:48.098290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.105 [2024-11-26 07:40:48.098297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:20.105 [2024-11-26 07:40:48.107081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:20.105 [2024-11-26 07:40:48.107100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:20.105 [2024-11-26 07:40:48.107106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-line pattern — a data digest error on tqpair=(0x20eda10) from nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done, a READ sqid:1 cid:N nsid:1 lba:N len:32 command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for every READ finishing between 07:40:48.117686 and 07:40:49.042685; only cid, lba, and sqhd vary ...]
00:31:21.156 3046.00 IOPS, 380.75 MiB/s [2024-11-26T06:40:49.254Z]
[... identical error triplets continue from 07:40:49.054491 through 07:40:49.332707 ...]
00:31:21.418 [2024-11-26 07:40:49.344235]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.344254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.344260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.418 [2024-11-26 07:40:49.355663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.355681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.355687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.418 [2024-11-26 07:40:49.367456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.367474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.367484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.418 [2024-11-26 07:40:49.379404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.379423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.379429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.418 [2024-11-26 07:40:49.391566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.391585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.391591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.418 [2024-11-26 07:40:49.402286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.402304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.402310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.418 [2024-11-26 07:40:49.413771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.413790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.413796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:31:21.418 [2024-11-26 07:40:49.424258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.424276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.424282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.418 [2024-11-26 07:40:49.435389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.435407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.435414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.418 [2024-11-26 07:40:49.445680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.445699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.445705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.418 [2024-11-26 07:40:49.455595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.455614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.418 [2024-11-26 07:40:49.455620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.418 [2024-11-26 07:40:49.466211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.418 [2024-11-26 07:40:49.466232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.419 [2024-11-26 07:40:49.466239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.419 [2024-11-26 07:40:49.475781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.419 [2024-11-26 07:40:49.475800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.419 [2024-11-26 07:40:49.475806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.419 [2024-11-26 07:40:49.483590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.419 [2024-11-26 07:40:49.483608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.419 [2024-11-26 07:40:49.483614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.419 [2024-11-26 07:40:49.495069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.419 [2024-11-26 07:40:49.495087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.419 [2024-11-26 07:40:49.495093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.419 [2024-11-26 07:40:49.506406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.419 [2024-11-26 07:40:49.506425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.419 [2024-11-26 07:40:49.506431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.679 [2024-11-26 07:40:49.516976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.679 [2024-11-26 07:40:49.516995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.679 [2024-11-26 07:40:49.517001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.679 [2024-11-26 07:40:49.528412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.679 [2024-11-26 07:40:49.528430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.679 [2024-11-26 07:40:49.528437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.679 [2024-11-26 07:40:49.539106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.679 [2024-11-26 07:40:49.539125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.679 [2024-11-26 07:40:49.539131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.679 [2024-11-26 07:40:49.549524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.679 [2024-11-26 07:40:49.549542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.679 [2024-11-26 07:40:49.549549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.679 [2024-11-26 07:40:49.560368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.679 [2024-11-26 07:40:49.560387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.679 [2024-11-26 07:40:49.560393] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.679 [2024-11-26 07:40:49.570123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.679 [2024-11-26 07:40:49.570141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.570147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.578773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.578791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.578797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.586626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.586644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.586651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.594962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.594980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.594987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.605226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.605244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.605250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.613059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.613077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.613084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.620002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.620021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.620027] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.630294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.630312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.630325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.640056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.640074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.640081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.649534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.649553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.649559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.658852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.658870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.658876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.669226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.669244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.669250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.679215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.679233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.679240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.688172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.688190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:21.680 [2024-11-26 07:40:49.688196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.696181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.696199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.696205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.705645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.705663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.705669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.715032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.715051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.715057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.724633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.724651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.724657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.732798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.732816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.732822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.742936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.742954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.742960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.753684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.753703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6112 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.753709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.680 [2024-11-26 07:40:49.765114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.680 [2024-11-26 07:40:49.765132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.680 [2024-11-26 07:40:49.765138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.940 [2024-11-26 07:40:49.774415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.940 [2024-11-26 07:40:49.774433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.940 [2024-11-26 07:40:49.774440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.940 [2024-11-26 07:40:49.785095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.940 [2024-11-26 07:40:49.785114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.940 [2024-11-26 07:40:49.785121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.940 [2024-11-26 07:40:49.796872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.940 [2024-11-26 07:40:49.796890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.940 [2024-11-26 07:40:49.796900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.940 [2024-11-26 07:40:49.808047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.940 [2024-11-26 07:40:49.808066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.940 [2024-11-26 07:40:49.808072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.940 [2024-11-26 07:40:49.820062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.940 [2024-11-26 07:40:49.820080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.820086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.830202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.830220] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.830226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.840705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.840723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.840729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.851119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.851137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.851143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.857273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.857291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.857297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.867012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.867031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.867037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.876259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.876277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.876283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.887230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.887251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.887257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.898555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.898574] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.898581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.908167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.908185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.908191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.915829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.915846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.915852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.925879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.925897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.925904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.934877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.934896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.934903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.946983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.947003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.947010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.959772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.959790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.959797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.972282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.972301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.972307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.984555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.984574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.984581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:49.996958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:49.996978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:49.996984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:50.008461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:50.008492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:50.008503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:50.020835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:50.020872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:50.020884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:21.941 [2024-11-26 07:40:50.032150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:21.941 [2024-11-26 07:40:50.032183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:21.941 [2024-11-26 07:40:50.032194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:22.201 [2024-11-26 07:40:50.044799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20eda10) 00:31:22.201 [2024-11-26 07:40:50.044826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:22.201 [2024-11-26 07:40:50.044837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:22.201 2977.50 IOPS, 372.19 MiB/s 00:31:22.201 Latency(us) 00:31:22.201 
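The MiB/s figures in the per-second ticks and in the summary that follows are just the IOPS scaled by the workload's 128 KiB (131072-byte) IO size: MiB/s = IOPS x 131072 / 2^20 = IOPS / 8. Checking against the log's own numbers, 2977.50 / 8 = 372.19 and 2980.9645 / 8 = 372.6206, matching the reported columns (the conversion is ordinary arithmetic; the figures are taken from the results below).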
00:31:22.201 Latency(us)
00:31:22.201 [2024-11-26T06:40:50.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:22.201 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:31:22.201 nvme0n1 : 2.00 2980.96 372.62 0.00 0.00 5365.02 935.25 12943.36
00:31:22.201 [2024-11-26T06:40:50.299Z] ===================================================================================================================
00:31:22.201 [2024-11-26T06:40:50.300Z] Total : 2980.96 372.62 0.00 0.00 5365.02 935.25 12943.36
00:31:22.202 {
00:31:22.202   "results": [
00:31:22.202     {
00:31:22.202       "job": "nvme0n1",
00:31:22.202       "core_mask": "0x2",
00:31:22.202       "workload": "randread",
00:31:22.202       "status": "finished",
00:31:22.202       "queue_depth": 16,
00:31:22.202       "io_size": 131072,
00:31:22.202       "runtime": 2.003043,
00:31:22.202       "iops": 2980.9644625701994,
00:31:22.202       "mibps": 372.62055782127493,
00:31:22.202       "io_failed": 0,
00:31:22.202       "io_timeout": 0,
00:31:22.202       "avg_latency_us": 5365.021997432033,
00:31:22.202       "min_latency_us": 935.2533333333333,
00:31:22.202       "max_latency_us": 12943.36
00:31:22.202     }
00:31:22.202   ],
00:31:22.202   "core_count": 1
00:31:22.202 }
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:22.202 | .driver_specific
00:31:22.202 | .nvme_error
00:31:22.202 | .status_code
00:31:22.202 | .command_transient_transport_error'
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 ))
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1635328
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1635328 ']'
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1635328
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:22.202 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1635328
00:31:22.461 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:22.461 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:22.461 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1635328'
00:31:22.461 killing process with pid 1635328
00:31:22.461 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1635328
00:31:22.461 Received shutdown signal, test time was about 2.000000 seconds
00:31:22.461
00:31:22.461 Latency(us)
00:31:22.461 [2024-11-26T06:40:50.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:22.461 [2024-11-26T06:40:50.560Z] ===================================================================================================================
00:31:22.462 [2024-11-26T06:40:50.560Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1635328
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1636208
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1636208 /var/tmp/bperf.sock
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1636208 ']'
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:22.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:22.462 07:40:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:22.462 [2024-11-26 07:40:50.490405] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:31:22.462 [2024-11-26 07:40:50.490463] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636208 ]
00:31:22.722 [2024-11-26 07:40:50.574895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:22.722 [2024-11-26 07:40:50.604339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:23.294 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:23.294 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:23.294 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:23.294 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:23.554 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:23.554 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.554 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:23.554 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.554 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:23.555 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:23.815 nvme0n1
00:31:23.815 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:31:23.815 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:23.815 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:23.815 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:23.815 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:23.815 07:40:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:23.815 Running I/O for 2 seconds...
00:31:23.815 [2024-11-26 07:40:51.874327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eea00
00:31:23.815 [2024-11-26 07:40:51.875358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:23.815 [2024-11-26 07:40:51.875384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:24.077 [... repeated digest-error triplets elided: from 07:40:51.883 through 07:40:52.095, tcp.c:2233:data_crc32_calc_done logs "*ERROR*: Data digest error on tqpair=(0x2455520)" roughly every 8-12 ms, each followed by the affected WRITE (sqid:1, len:1) and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:31:24.077 [2024-11-26 07:40:52.103117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f0ff8
00:31:24.077 [2024-11-26 07:40:52.104104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:24.077 [2024-11-26 07:40:52.104120] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.077 [2024-11-26 07:40:52.111576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166edd58 00:31:24.077 [2024-11-26 07:40:52.112560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.077 [2024-11-26 07:40:52.112576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.077 [2024-11-26 07:40:52.120030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e6300 00:31:24.077 [2024-11-26 07:40:52.121002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.077 [2024-11-26 07:40:52.121018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.077 [2024-11-26 07:40:52.128470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f81e0 00:31:24.077 [2024-11-26 07:40:52.129452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.077 [2024-11-26 07:40:52.129476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.077 [2024-11-26 07:40:52.136902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f4f40 00:31:24.077 [2024-11-26 07:40:52.137884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.077 [2024-11-26 07:40:52.137900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.077 [2024-11-26 07:40:52.145376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f1ca0 00:31:24.078 [2024-11-26 07:40:52.146374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.078 [2024-11-26 07:40:52.146390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.078 [2024-11-26 07:40:52.153832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eea00 00:31:24.078 [2024-11-26 07:40:52.154812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.078 [2024-11-26 07:40:52.154828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.078 [2024-11-26 07:40:52.162309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eb760 00:31:24.078 [2024-11-26 07:40:52.163290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.078 [2024-11-26 07:40:52.163307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.339 [2024-11-26 07:40:52.170756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fa7d8 00:31:24.339 [2024-11-26 07:40:52.171742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.339 [2024-11-26 07:40:52.171758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.339 [2024-11-26 07:40:52.179196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f7538 00:31:24.339 [2024-11-26 07:40:52.180150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.339 [2024-11-26 07:40:52.180173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.339 [2024-11-26 07:40:52.188776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f4298 00:31:24.339 [2024-11-26 07:40:52.190101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.339 [2024-11-26 07:40:52.190117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:24.339 [2024-11-26 07:40:52.194789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f46d0 00:31:24.339 [2024-11-26 07:40:52.195315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.339 [2024-11-26 07:40:52.195331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:24.339 [2024-11-26 07:40:52.202844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fac10 00:31:24.339 [2024-11-26 07:40:52.203479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.203495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.212300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fda78 00:31:24.340 [2024-11-26 07:40:52.213062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.213079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.220746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fd208 00:31:24.340 [2024-11-26 07:40:52.221521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 
07:40:52.221537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.229182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fc128 00:31:24.340 [2024-11-26 07:40:52.229965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.229981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.237624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166de8a8 00:31:24.340 [2024-11-26 07:40:52.238390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.238406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.246093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166df988 00:31:24.340 [2024-11-26 07:40:52.246829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.246845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.254551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f6890 00:31:24.340 [2024-11-26 07:40:52.255358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.255374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.262998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ee5c8 00:31:24.340 [2024-11-26 07:40:52.263786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.263802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.271440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f57b0 00:31:24.340 [2024-11-26 07:40:52.272206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.272222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.279888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fb048 00:31:24.340 [2024-11-26 07:40:52.280675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:24.340 [2024-11-26 07:40:52.280691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.288351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f0788 00:31:24.340 [2024-11-26 07:40:52.289128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.289144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.296803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f2948 00:31:24.340 [2024-11-26 07:40:52.297595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.297611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.305262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e6738 00:31:24.340 [2024-11-26 07:40:52.306033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.306049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.313691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f3a28 00:31:24.340 [2024-11-26 07:40:52.314478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.314495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.322125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e88f8 00:31:24.340 [2024-11-26 07:40:52.322910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.322927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.330574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e99d8 00:31:24.340 [2024-11-26 07:40:52.331358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.331375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.339034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eaab8 00:31:24.340 [2024-11-26 07:40:52.339827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9194 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.339843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.347493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fef90 00:31:24.340 [2024-11-26 07:40:52.348273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.348289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.355935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fdeb0 00:31:24.340 [2024-11-26 07:40:52.356723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.356739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.364392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fcdd0 00:31:24.340 [2024-11-26 07:40:52.365162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.365178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.372806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ddc00 00:31:24.340 [2024-11-26 07:40:52.373546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.373562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.381274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166dece0 00:31:24.340 [2024-11-26 07:40:52.382009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.382025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.389727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166efae0 00:31:24.340 [2024-11-26 07:40:52.390528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.390545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.398186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f7538 00:31:24.340 [2024-11-26 07:40:52.398974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:18272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.398994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.406622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f5378 00:31:24.340 [2024-11-26 07:40:52.407372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.407388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.415052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ec840 00:31:24.340 [2024-11-26 07:40:52.415842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.340 [2024-11-26 07:40:52.415858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.340 [2024-11-26 07:40:52.423492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f0bc0 00:31:24.340 [2024-11-26 07:40:52.424278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.341 [2024-11-26 07:40:52.424294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.431935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e5220 00:31:24.602 [2024-11-26 07:40:52.432716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.432732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.440389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f1ca0 00:31:24.602 [2024-11-26 07:40:52.441138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.441154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.448878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f3e60 00:31:24.602 [2024-11-26 07:40:52.449661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.449677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.457314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e84c0 00:31:24.602 [2024-11-26 07:40:52.458104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:10257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.458120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.465750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e95a0 00:31:24.602 [2024-11-26 07:40:52.466545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.466561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.474200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ea680 00:31:24.602 [2024-11-26 07:40:52.474983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.474999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.482641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fb480 00:31:24.602 [2024-11-26 07:40:52.483434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.483450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.491072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fda78 00:31:24.602 [2024-11-26 07:40:52.491848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.491864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.499494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fd208 00:31:24.602 [2024-11-26 07:40:52.500260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.500276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.507912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fc128 00:31:24.602 [2024-11-26 07:40:52.508689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.508705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.516348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166de8a8 00:31:24.602 [2024-11-26 07:40:52.517135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.517151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.524808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166df988 00:31:24.602 [2024-11-26 07:40:52.525593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.525609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.533263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f6890 00:31:24.602 [2024-11-26 07:40:52.534040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.534056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.541704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ee5c8 00:31:24.602 [2024-11-26 07:40:52.542487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.542503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.550141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f57b0 00:31:24.602 [2024-11-26 07:40:52.550927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.602 [2024-11-26 07:40:52.550943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.602 [2024-11-26 07:40:52.558567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fb048 00:31:24.602 [2024-11-26 07:40:52.559331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.559347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.567001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f0788 00:31:24.603 [2024-11-26 07:40:52.567782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.567798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.575445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f2948 00:31:24.603 [2024-11-26 
07:40:52.576209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.576227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.583875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e6738 00:31:24.603 [2024-11-26 07:40:52.584647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.584664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.592305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f3a28 00:31:24.603 [2024-11-26 07:40:52.593087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.593103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.600745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e88f8 00:31:24.603 [2024-11-26 07:40:52.601519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.601536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.609186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e99d8 00:31:24.603 [2024-11-26 07:40:52.609966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.609982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.617635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eaab8 00:31:24.603 [2024-11-26 07:40:52.618388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.618407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.626074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fef90 00:31:24.603 [2024-11-26 07:40:52.626868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.626884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.634509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fdeb0 00:31:24.603 
[2024-11-26 07:40:52.635264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.635281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.642943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fcdd0 00:31:24.603 [2024-11-26 07:40:52.643713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.643730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.651376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ddc00 00:31:24.603 [2024-11-26 07:40:52.652153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.652172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.659813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166dece0 00:31:24.603 [2024-11-26 07:40:52.660589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.660605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.668258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166efae0 00:31:24.603 [2024-11-26 07:40:52.669037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.669053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.676693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f7538 00:31:24.603 [2024-11-26 07:40:52.677472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.677488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.685132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f5378 00:31:24.603 [2024-11-26 07:40:52.685921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.603 [2024-11-26 07:40:52.685937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.603 [2024-11-26 07:40:52.693557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ec840 
00:31:24.865 [2024-11-26 07:40:52.694321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.694340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.702136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f0bc0 00:31:24.865 [2024-11-26 07:40:52.702924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.702941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.710588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e5220 00:31:24.865 [2024-11-26 07:40:52.711371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.711387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.719031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f1ca0 00:31:24.865 [2024-11-26 07:40:52.719818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.719833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.727457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f3e60 00:31:24.865 [2024-11-26 07:40:52.728222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.728238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.735878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e84c0 00:31:24.865 [2024-11-26 07:40:52.736654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.736670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.744335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e95a0 00:31:24.865 [2024-11-26 07:40:52.745121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.745137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.752785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) 
with pdu=0x2000166ea680 00:31:24.865 [2024-11-26 07:40:52.753577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.753593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.761271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fb480 00:31:24.865 [2024-11-26 07:40:52.762042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.762058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.769696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fda78 00:31:24.865 [2024-11-26 07:40:52.770486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.770502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.778134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fd208 00:31:24.865 [2024-11-26 07:40:52.778919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.778935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.786565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fc128 00:31:24.865 [2024-11-26 07:40:52.787349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.787365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.794998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166de8a8 00:31:24.865 [2024-11-26 07:40:52.795740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.795756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.865 [2024-11-26 07:40:52.803611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166df988 00:31:24.865 [2024-11-26 07:40:52.804395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.865 [2024-11-26 07:40:52.804411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.812053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2455520) with pdu=0x2000166f6890 00:31:24.866 [2024-11-26 07:40:52.812828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.812844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.820491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ee5c8 00:31:24.866 [2024-11-26 07:40:52.821226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.821242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.828983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f57b0 00:31:24.866 [2024-11-26 07:40:52.829764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.829780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.837431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fb048 00:31:24.866 [2024-11-26 07:40:52.838219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.838235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.845879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f0788 00:31:24.866 [2024-11-26 07:40:52.846657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.846674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.854318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f2948 00:31:24.866 [2024-11-26 07:40:52.855088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.855104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:24.866 29896.00 IOPS, 116.78 MiB/s [2024-11-26T06:40:52.964Z] [2024-11-26 07:40:52.862747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f4298 00:31:24.866 [2024-11-26 07:40:52.863520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.863537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 
07:40:52.871181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e9168 00:31:24.866 [2024-11-26 07:40:52.871955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.871970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.879610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fb480 00:31:24.866 [2024-11-26 07:40:52.880363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.880379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.888065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fd208 00:31:24.866 [2024-11-26 07:40:52.888853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.888869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.896550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166de8a8 00:31:24.866 [2024-11-26 07:40:52.897338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.897354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.904988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f6890 00:31:24.866 [2024-11-26 07:40:52.905765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.905781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.913423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f57b0 00:31:24.866 [2024-11-26 07:40:52.914217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.914235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.921840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f0788 00:31:24.866 [2024-11-26 07:40:52.922591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.922607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:31:24.866 [2024-11-26 07:40:52.930276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e6738 00:31:24.866 [2024-11-26 07:40:52.931064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.931080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.938736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e88f8 00:31:24.866 [2024-11-26 07:40:52.939514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.939530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.947200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eaab8 00:31:24.866 [2024-11-26 07:40:52.947987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.948003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:24.866 [2024-11-26 07:40:52.955632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fe720 00:31:24.866 [2024-11-26 07:40:52.956378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:24.866 [2024-11-26 07:40:52.956394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.127 [2024-11-26 07:40:52.964058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fc998 00:31:25.127 [2024-11-26 07:40:52.964848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.127 [2024-11-26 07:40:52.964864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.127 [2024-11-26 07:40:52.972491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166df118 00:31:25.128 [2024-11-26 07:40:52.973283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:52.973299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:52.980946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f7100 00:31:25.128 [2024-11-26 07:40:52.981737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:52.981753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003e 
p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:52.989391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ecc78 00:31:25.128 [2024-11-26 07:40:52.990182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:52.990198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:52.997833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e4de8 00:31:25.128 [2024-11-26 07:40:52.998609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:52.998625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.006258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f4298 00:31:25.128 [2024-11-26 07:40:53.007038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.007053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.014676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e9168 00:31:25.128 [2024-11-26 07:40:53.015465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.015481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.023114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fb480 00:31:25.128 [2024-11-26 07:40:53.023906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.023922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.031560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fd208 00:31:25.128 [2024-11-26 07:40:53.032344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.032360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.040011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166de8a8 00:31:25.128 [2024-11-26 07:40:53.040797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.040813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.048445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f6890 00:31:25.128 [2024-11-26 07:40:53.049215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.049231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.056858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f57b0 00:31:25.128 [2024-11-26 07:40:53.057649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.057666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.065299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f0788 00:31:25.128 [2024-11-26 07:40:53.066078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.066094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.073747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e6738 00:31:25.128 [2024-11-26 07:40:53.074520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.074536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.082206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e88f8 00:31:25.128 [2024-11-26 07:40:53.083002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.083018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.090652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eaab8 00:31:25.128 [2024-11-26 07:40:53.091432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.091447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.099089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fe720 00:31:25.128 [2024-11-26 07:40:53.099869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.099886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.107527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fc998 00:31:25.128 [2024-11-26 07:40:53.108311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.108327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.115971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166df118 00:31:25.128 [2024-11-26 07:40:53.116756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.116772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.124429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f7100 00:31:25.128 [2024-11-26 07:40:53.125205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.125221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.132889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ecc78 00:31:25.128 [2024-11-26 07:40:53.133683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.133702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.141332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e4de8 00:31:25.128 [2024-11-26 07:40:53.142116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.128 [2024-11-26 07:40:53.142132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.128 [2024-11-26 07:40:53.149761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f4298 00:31:25.129 [2024-11-26 07:40:53.150547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.129 [2024-11-26 07:40:53.150563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.129 [2024-11-26 07:40:53.158193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e9168 00:31:25.129 [2024-11-26 07:40:53.158979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.129 [2024-11-26 07:40:53.158995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.129 [2024-11-26 07:40:53.166648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fb480 00:31:25.129 [2024-11-26 07:40:53.167420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.129 [2024-11-26 07:40:53.167436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.129 [2024-11-26 07:40:53.175087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fd208 00:31:25.129 [2024-11-26 07:40:53.175878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.129 [2024-11-26 07:40:53.175894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.129 [2024-11-26 07:40:53.183526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166de8a8 00:31:25.129 [2024-11-26 07:40:53.184321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.129 [2024-11-26 07:40:53.184337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.129 [2024-11-26 07:40:53.191957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f6890 00:31:25.129 [2024-11-26 07:40:53.192734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.129 [2024-11-26 07:40:53.192751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.129 [2024-11-26 07:40:53.200391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f57b0 00:31:25.129 [2024-11-26 07:40:53.201134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.129 [2024-11-26 07:40:53.201150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.129 [2024-11-26 07:40:53.208831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f0788 00:31:25.129 [2024-11-26 07:40:53.209506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.129 [2024-11-26 07:40:53.209523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:25.129 [2024-11-26 07:40:53.217496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e27f0 00:31:25.129 [2024-11-26 07:40:53.218017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.129 [2024-11-26 07:40:53.218033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.390 [2024-11-26 07:40:53.226100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f2d80 00:31:25.390 [2024-11-26 07:40:53.226972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.390 [2024-11-26 07:40:53.226987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.390 [2024-11-26 07:40:53.234533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fa7d8 00:31:25.390 [2024-11-26 07:40:53.235391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.390 [2024-11-26 07:40:53.235407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.390 [2024-11-26 07:40:53.242989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f8618 00:31:25.390 [2024-11-26 07:40:53.243871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.390 [2024-11-26 07:40:53.243886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.390 [2024-11-26 07:40:53.251432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e99d8 00:31:25.390 [2024-11-26 07:40:53.252259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.390 [2024-11-26 07:40:53.252274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.390 [2024-11-26 07:40:53.259897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e88f8 00:31:25.391 [2024-11-26 07:40:53.260758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.260774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.268355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f3a28 00:31:25.391 [2024-11-26 07:40:53.269226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.269244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.276839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e6738 00:31:25.391 [2024-11-26 07:40:53.277705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 
07:40:53.277721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.285267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f2948 00:31:25.391 [2024-11-26 07:40:53.286126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.286142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.293694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e2c28 00:31:25.391 [2024-11-26 07:40:53.294571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.294588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.302138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e3d08 00:31:25.391 [2024-11-26 07:40:53.303021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.303037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.310603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f96f8 00:31:25.391 [2024-11-26 07:40:53.311476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.311492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.319035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eee38 00:31:25.391 [2024-11-26 07:40:53.319902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.319918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.327466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e5ec8 00:31:25.391 [2024-11-26 07:40:53.328358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.328373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.335892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eb760 00:31:25.391 [2024-11-26 07:40:53.336769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20800 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:25.391 [2024-11-26 07:40:53.336785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.344356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f6020 00:31:25.391 [2024-11-26 07:40:53.345217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.345232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.352800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ed920 00:31:25.391 [2024-11-26 07:40:53.353683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.353702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.361262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e5a90 00:31:25.391 [2024-11-26 07:40:53.362138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.362153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.369706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f9b30 00:31:25.391 [2024-11-26 07:40:53.370587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.370602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.378142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f1868 00:31:25.391 [2024-11-26 07:40:53.379021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.379036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.386579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f8a50 00:31:25.391 [2024-11-26 07:40:53.387433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.387450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.395036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ddc00 00:31:25.391 [2024-11-26 07:40:53.395914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7340 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.395930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.403490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fcdd0 00:31:25.391 [2024-11-26 07:40:53.404321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.404338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.411936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fdeb0 00:31:25.391 [2024-11-26 07:40:53.412800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.412815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.420368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fef90 00:31:25.391 [2024-11-26 07:40:53.421236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.421251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.428797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ea680 00:31:25.391 [2024-11-26 07:40:53.429643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.429659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.437255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e38d0 00:31:25.391 [2024-11-26 07:40:53.438118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.391 [2024-11-26 07:40:53.438133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.391 [2024-11-26 07:40:53.445712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e49b0 00:31:25.391 [2024-11-26 07:40:53.446572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.392 [2024-11-26 07:40:53.446588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.392 [2024-11-26 07:40:53.454154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ef270 00:31:25.392 [2024-11-26 07:40:53.455022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:69 nsid:1 lba:15373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.392 [2024-11-26 07:40:53.455038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.392 [2024-11-26 07:40:53.462596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ff3c8 00:31:25.392 [2024-11-26 07:40:53.463431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.392 [2024-11-26 07:40:53.463448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.392 [2024-11-26 07:40:53.471106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f4b08 00:31:25.392 [2024-11-26 07:40:53.471986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.392 [2024-11-26 07:40:53.472002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.392 [2024-11-26 07:40:53.479547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ec408 00:31:25.392 [2024-11-26 07:40:53.480413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.392 [2024-11-26 07:40:53.480428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.487988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ed4e8 00:31:25.652 [2024-11-26 07:40:53.488830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.488846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.496442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f7da8 00:31:25.652 [2024-11-26 07:40:53.497274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.497290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.504890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f2d80 00:31:25.652 [2024-11-26 07:40:53.505757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.505772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.513338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fa7d8 00:31:25.652 [2024-11-26 07:40:53.514200] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.514216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.521767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f8618 00:31:25.652 [2024-11-26 07:40:53.522647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.522663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.530219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e99d8 00:31:25.652 [2024-11-26 07:40:53.531074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.531090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.538707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e88f8 00:31:25.652 [2024-11-26 07:40:53.539592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.539607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.547172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f3a28 00:31:25.652 [2024-11-26 07:40:53.548047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.548063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.555606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e6738 00:31:25.652 [2024-11-26 07:40:53.556464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.556480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.564040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f2948 00:31:25.652 [2024-11-26 07:40:53.564908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.564924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.572481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e2c28 00:31:25.652 [2024-11-26 07:40:53.573361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.573379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.580930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e3d08 00:31:25.652 [2024-11-26 07:40:53.581811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.581827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.589376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f96f8 00:31:25.652 [2024-11-26 07:40:53.590222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.590238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.597807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eee38 00:31:25.652 [2024-11-26 07:40:53.598672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.598688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.606236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e5ec8 00:31:25.652 [2024-11-26 07:40:53.607064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.652 [2024-11-26 07:40:53.607080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.652 [2024-11-26 07:40:53.614661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166eb760 00:31:25.652 [2024-11-26 07:40:53.615524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.615540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.623106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f6020 00:31:25.653 [2024-11-26 07:40:53.623975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.623991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.631564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ed920 00:31:25.653 [2024-11-26 
07:40:53.632411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.632427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.640017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e5a90 00:31:25.653 [2024-11-26 07:40:53.640892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.640908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.648467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f9b30 00:31:25.653 [2024-11-26 07:40:53.649360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.649377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.656902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f1868 00:31:25.653 [2024-11-26 07:40:53.657795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.657811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.665367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f8a50 00:31:25.653 [2024-11-26 07:40:53.666234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.666249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.673821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ddc00 00:31:25.653 [2024-11-26 07:40:53.674701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.674717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.682284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fcdd0 00:31:25.653 [2024-11-26 07:40:53.683143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.683162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.690737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fdeb0 
00:31:25.653 [2024-11-26 07:40:53.691608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.691624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.699176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fef90 00:31:25.653 [2024-11-26 07:40:53.700029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.700045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.707605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ea680 00:31:25.653 [2024-11-26 07:40:53.708469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.708485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.716055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e38d0 00:31:25.653 [2024-11-26 07:40:53.716923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.716939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.724506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e49b0 00:31:25.653 [2024-11-26 07:40:53.725354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.725369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.732953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ef270 00:31:25.653 [2024-11-26 07:40:53.733826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.733841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.653 [2024-11-26 07:40:53.741395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ff3c8 00:31:25.653 [2024-11-26 07:40:53.742234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.653 [2024-11-26 07:40:53.742251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.749833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with 
pdu=0x2000166f4b08 00:31:25.913 [2024-11-26 07:40:53.750702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.750719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.758280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ec408 00:31:25.913 [2024-11-26 07:40:53.759156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.759175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.766732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166ed4e8 00:31:25.913 [2024-11-26 07:40:53.767601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.767617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.775187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f7da8 00:31:25.913 [2024-11-26 07:40:53.776066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.776081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.783647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f2d80 00:31:25.913 [2024-11-26 07:40:53.784524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.784540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.792073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166fa7d8 00:31:25.913 [2024-11-26 07:40:53.792954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.792973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.800667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f8618 00:31:25.913 [2024-11-26 07:40:53.801491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.801507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.809120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2455520) with pdu=0x2000166e99d8 00:31:25.913 [2024-11-26 07:40:53.810010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.810025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.817587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e88f8 00:31:25.913 [2024-11-26 07:40:53.818430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.818446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.826031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f3a28 00:31:25.913 [2024-11-26 07:40:53.826852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.826868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.834536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e6738 00:31:25.913 [2024-11-26 07:40:53.835418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.913 [2024-11-26 07:40:53.835434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.913 [2024-11-26 07:40:53.842975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166f2948 00:31:25.914 [2024-11-26 07:40:53.843813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.914 [2024-11-26 07:40:53.843830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.914 [2024-11-26 07:40:53.851418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e2c28 00:31:25.914 [2024-11-26 07:40:53.852294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.914 [2024-11-26 07:40:53.852310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.914 [2024-11-26 07:40:53.859890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455520) with pdu=0x2000166e3d08 00:31:25.914 [2024-11-26 07:40:53.861628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:25.914 [2024-11-26 07:40:53.861645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:25.914 30064.50 IOPS, 117.44 MiB/s 00:31:25.914 Latency(us) 00:31:25.914 
[2024-11-26T06:40:54.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:25.914 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:25.914 nvme0n1 : 2.00 30082.17 117.51 0.00 0.00 4250.98 2280.11 16165.55
00:31:25.914 [2024-11-26T06:40:54.012Z] ===================================================================================================================
00:31:25.914 [2024-11-26T06:40:54.012Z] Total : 30082.17 117.51 0.00 0.00 4250.98 2280.11 16165.55
00:31:25.914 {
00:31:25.914 "results": [
00:31:25.914 {
00:31:25.914 "job": "nvme0n1",
00:31:25.914 "core_mask": "0x2",
00:31:25.914 "workload": "randwrite",
00:31:25.914 "status": "finished",
00:31:25.914 "queue_depth": 128,
00:31:25.914 "io_size": 4096,
00:31:25.914 "runtime": 2.00308,
00:31:25.914 "iops": 30082.173452882562,
00:31:25.914 "mibps": 117.50849005032251,
00:31:25.914 "io_failed": 0,
00:31:25.914 "io_timeout": 0,
00:31:25.914 "avg_latency_us": 4250.9750878182895,
00:31:25.914 "min_latency_us": 2280.1066666666666,
00:31:25.914 "max_latency_us": 16165.546666666667
00:31:25.914 }
00:31:25.914 ],
00:31:25.914 "core_count": 1
00:31:25.914 }
00:31:25.914 07:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:25.914 07:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:25.914 07:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:25.914 | .driver_specific
00:31:25.914 | .nvme_error
00:31:25.914 | .status_code
00:31:25.914 | .command_transient_transport_error'
00:31:25.914 07:40:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:26.174 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
00:31:26.174 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1636208
00:31:26.174 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1636208 ']'
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1636208
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1636208
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1636208'
killing process with pid 1636208
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1636208
Received shutdown signal, test time was about 2.000000 seconds
00:31:26.175
00:31:26.175 Latency(us)
[2024-11-26T06:40:54.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:26.175 [2024-11-26T06:40:54.273Z] ===================================================================================================================
00:31:26.175 [2024-11-26T06:40:54.273Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1636208
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1636973
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1636973 /var/tmp/bperf.sock
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1636973 ']'
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:26.175 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:26.435 07:40:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:26.435 [2024-11-26 07:40:54.289692] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
[2024-11-26 07:40:54.289752] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636973 ]
00:31:26.435 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:26.435 Zero copy mechanism will not be used.
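The run_bperf_err step traced above relaunches bdevperf idle (-z holds it until a perform_tests RPC arrives) and waitforlisten polls the UNIX socket before any RPC is issued, while the earlier get_transient_errcount check reduces to one bdev_get_iostat RPC piped through jq. A minimal standalone sketch of both steps, assuming a stock SPDK checkout layout; the polling loop below is an assumed stand-in for the harness's waitforlisten helper, and rpc_get_methods is used only as a cheap liveness probe, not something the trace shows:

  #!/usr/bin/env bash
  # Hedged sketch, not the harness code: relaunch bdevperf idle and re-run
  # the transient-error check. Flags mirror the trace: core mask 0x2,
  # randwrite, 128 KiB I/O, queue depth 16, 2 s runtime.
  sock=/var/tmp/bperf.sock
  ./build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Stand-in for waitforlisten: poll until the socket answers RPCs.
  until ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done
  # get_transient_errcount, as traced: bdev_get_iostat plus a jq filter over
  # the per-status-code NVMe error counters (requires --nvme-error-stat).
  errcount=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))  # the harness asserts the counter is positive

Against the 4 KiB pass that just completed, the same filter returned 236, which is the (( 236 > 0 )) assertion seen in the trace.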
00:31:26.435 [2024-11-26 07:40:54.372596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:26.435 [2024-11-26 07:40:54.402503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:27.004 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:27.004 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:27.004 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:27.004 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:27.264 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:27.264 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:27.264 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:27.264 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:27.264 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:27.264 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:27.524 nvme0n1
00:31:27.524 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:27.524 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:27.524 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:27.524 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:27.524 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:27.524 07:40:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:27.785 I/O size of 131072 is greater than zero copy threshold (65536).
00:31:27.785 Zero copy mechanism will not be used.
00:31:27.785 Running I/O for 2 seconds...
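The xtrace lines above are the complete error-injection setup for this pass, before the two-second run below starts. Condensed into plain shell, assuming the same RPC socket, target address, and SPDK paths as the run; this is a reconstruction of the traced commands, not the test script itself:

#!/usr/bin/env bash
# Condensed sketch of the traced setup (host/digest.sh@61-@69).
rpc=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock)
# Keep per-status-code NVMe error counters and retry failed I/O forever, so
# injected digest errors are counted instead of failing the job.
"${rpc[@]}" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any crc32c injection left over from the previous pass, as the trace does.
"${rpc[@]}" accel_error_inject_error -o crc32c -t disable
# Attach the target with data digest (--ddgst) enabled, so each TCP data PDU
# carries a CRC32C for the injected corruption to break.
"${rpc[@]}" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Re-arm the injection in corrupt mode with the same -i 32 argument as the trace.
"${rpc[@]}" accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the queued bdevperf job over the same socket.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests

Because the retry count is infinite, each corrupted digest surfaces below only as a "Data digest error" plus a COMMAND TRANSIENT TRANSPORT ERROR completion, while the workload itself keeps running.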
00:31:27.785 [2024-11-26 07:40:55.679833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.785 [2024-11-26 07:40:55.680099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.785 [2024-11-26 07:40:55.680124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.785 [2024-11-26 07:40:55.687915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.785 [2024-11-26 07:40:55.688112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.785 [2024-11-26 07:40:55.688131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.785 [2024-11-26 07:40:55.694514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.785 [2024-11-26 07:40:55.694841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.785 [2024-11-26 07:40:55.694860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.702130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.702284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.702302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.707259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.707554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.707572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.712689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.712882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.712898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.716122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.716322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.716338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.719872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.720062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.720078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.723641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.723829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.723845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.729669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.729863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.729879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.735936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.736126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.736142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.739562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.739754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.739770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.742985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.743177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.743194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.746738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.746928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.746944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.752569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.752876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.752893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.757204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.757395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.757411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.760717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.760917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.760939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.765779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.765966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.765982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.770001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.770044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.770059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.775178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.775234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.775249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.779075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.779271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.779287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.782572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.782761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.782777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.786282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.786472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.786488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.789918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.790119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.790135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.793841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.794032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.794048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.797618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.797808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.797824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.801942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.802133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.802150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.805627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.786 [2024-11-26 07:40:55.805818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.786 [2024-11-26 07:40:55.805834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.786 [2024-11-26 07:40:55.808967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.809156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.809178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.812344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.812533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.812549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.817629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.817967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.817984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.822896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.823096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.823112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.829359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.829686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.829702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.835567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.835879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.835897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.840011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.840205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 
07:40:55.840221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.844070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.844276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.844293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.847801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.848003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.848019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.851697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.851886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.851902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.859439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.859642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.859658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.865558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.865736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.865752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.868843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.869020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.869035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.872337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.872517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:27.787 [2024-11-26 07:40:55.872533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:27.787 [2024-11-26 07:40:55.875848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:27.787 [2024-11-26 07:40:55.876026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.787 [2024-11-26 07:40:55.876045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.879228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.879408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.879424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.882793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.882971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.882987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.889142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.889324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.889341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.892531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.892708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.892723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.895791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.895967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.895984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.899235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.899414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.899430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.902456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.902632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.902648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.905638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.905816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.905831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.911578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.911757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.911773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.917205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.917385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.917401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.920561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.920739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.920755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.926384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.926561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.926577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.933572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.933854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.933870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.937869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.938047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.938063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.941373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.941551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.941567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.944666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.944842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.944858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.948033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.948215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.948231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.951918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.952093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.952109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.955278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.955452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.955468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.958711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.958886] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.958902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.966255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.966585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.966601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.972436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.049 [2024-11-26 07:40:55.972603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.049 [2024-11-26 07:40:55.972619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.049 [2024-11-26 07:40:55.976036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:55.976209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:55.976225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:55.980029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:55.980199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:55.980215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:55.984180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:55.984348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:55.984364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:55.988627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:55.988794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:55.988812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:55.994142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:55.994314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:55.994330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:55.998728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:55.998895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:55.998911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.003050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.003222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.003238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.009287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.009558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.009574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.017448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.017711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.017727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.022706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.022873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.022889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.029189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.029488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.029505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.033610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 
07:40:56.033777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.033793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.041589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.041761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.041777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.047108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.047280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.047297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.050871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.051039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.051054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.055404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.055574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.055590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.059318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.059486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.059501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.063023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.063211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.063227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.067214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 
00:31:28.050 [2024-11-26 07:40:56.067380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.067396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.071535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.071702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.071718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.077456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.077667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.077683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.085466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.085656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.085671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.089789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.089966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.089982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.094055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.094228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.094244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.098112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.098282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.098299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.103375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.103563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.103580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.050 [2024-11-26 07:40:56.112969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.050 [2024-11-26 07:40:56.113024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.050 [2024-11-26 07:40:56.113039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.051 [2024-11-26 07:40:56.120359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.051 [2024-11-26 07:40:56.120526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.051 [2024-11-26 07:40:56.120541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.051 [2024-11-26 07:40:56.130210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.051 [2024-11-26 07:40:56.130310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.051 [2024-11-26 07:40:56.130325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.312 [2024-11-26 07:40:56.140896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.312 [2024-11-26 07:40:56.141177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.312 [2024-11-26 07:40:56.141197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.312 [2024-11-26 07:40:56.151913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.312 [2024-11-26 07:40:56.152213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.312 [2024-11-26 07:40:56.152229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.312 [2024-11-26 07:40:56.163130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.312 [2024-11-26 07:40:56.163457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.312 [2024-11-26 07:40:56.163472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.312 [2024-11-26 07:40:56.174095] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.312 [2024-11-26 07:40:56.174156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.312 [2024-11-26 07:40:56.174175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.312 [2024-11-26 07:40:56.184731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.185016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.185031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.192200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.192363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.192378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.201933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.202174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.202189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.213482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.213773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.213789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.224892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.225070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.225085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.236025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.236318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.236334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.246246] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.246478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.246494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.257694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.257987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.258003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.268518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.268786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.268801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.279524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.279798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.279814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.290607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.290835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.290850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.302097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.302347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.302362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.313091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.313383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.313399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.313 
[2024-11-26 07:40:56.324593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.324915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.324930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.335450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.335529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.335544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.346373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.346624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.346639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.357120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.357427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.357443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.368432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.368677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.368691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.379658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.379726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.379740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.388626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.388690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.388705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.397524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.397799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.397815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.313 [2024-11-26 07:40:56.403033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.313 [2024-11-26 07:40:56.403322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.313 [2024-11-26 07:40:56.403338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.409837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.409896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.409914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.418629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.418901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.418917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.426858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.426934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.426950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.434793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.434850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.434866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.438787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.438830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.438845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.442972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.443031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.443046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.450667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.450987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.451002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.460351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.460611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.460626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.470281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.470571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.470587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.481104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.481387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.481401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.491463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.491582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.491597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.499621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.499718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.499734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.508338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.508523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.508538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.516000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.516103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.516119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.523197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.523306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.523321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.530469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.530716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.530731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.540575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.540855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.540872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.550587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.550874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.550890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.561375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.561629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.561645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.576 [2024-11-26 07:40:56.572056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.576 [2024-11-26 07:40:56.572342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.576 [2024-11-26 07:40:56.572359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.576876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.576952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.576967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.579821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.579880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.579895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.582882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.582949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.582964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.585682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.585736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.585751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.588424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.588474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.588489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.591177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.591243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 
07:40:56.591258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.594611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.594656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.594673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.597744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.597798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.597813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.600514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.600569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.600584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.605883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.606080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.606095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.610414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.610611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.610626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.615317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.615379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.615394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.619809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.619860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:28.577 [2024-11-26 07:40:56.619875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.622690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.622759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.622774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.625712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.625761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.625776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.628737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.628808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.628823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.632149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.632220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.632234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.634754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.634812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.634827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.638330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.638377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.638392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.641069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.641124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.641139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.646120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.646170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.646185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.649275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.649371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.649385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.652472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.652526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.652541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.655933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.655978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.655993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.658944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.659003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.659018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.661486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.661541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.661556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.577 [2024-11-26 07:40:56.664005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.577 [2024-11-26 07:40:56.664060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.577 [2024-11-26 07:40:56.664075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.578 [2024-11-26 07:40:56.666649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.578 [2024-11-26 07:40:56.666706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.840 [2024-11-26 07:40:56.666722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.840 [2024-11-26 07:40:56.669140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.840 [2024-11-26 07:40:56.669206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.669221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.671591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.672803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.672820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.841 5238.00 IOPS, 654.75 MiB/s [2024-11-26T06:40:56.939Z] [2024-11-26 07:40:56.675219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.675266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.675281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.678013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.678092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.678107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.680496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.680558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.680573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.682955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.683007] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.683022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.685429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.685495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.685510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.687863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.687924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.687939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.690514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.690567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.690582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.695766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.695819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.695834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.698779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.698829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.698844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.703299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.703363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.703378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.706997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 
[2024-11-26 07:40:56.707202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.707217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.715511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.715679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.715695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.722365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.722429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.722445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.725636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.725696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.725711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.728676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.728732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.728747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.732433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.732503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.732518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.737087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.737141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.737156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.740623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with 
pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.740820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.740835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.747277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.747329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.747345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.751528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.751583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.751604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.754991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.755047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.755062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.762334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.762551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.762566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.769513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.769757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.769772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.775941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.776205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.841 [2024-11-26 07:40:56.776220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.841 [2024-11-26 07:40:56.783000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.841 [2024-11-26 07:40:56.783156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.783175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.786746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.786796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.786812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.790147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.790207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.790222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.795112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.795209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.795223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.803397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.803675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.803690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.812371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.812427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.812443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.820732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.820791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.820806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.826115] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.826163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.826178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.833151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.833221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.833236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.837397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.837469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.837485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.842193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.842252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.842267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.850807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.851114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.851131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.858610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.858677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.858692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.862989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.863078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.863093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.842 
[2024-11-26 07:40:56.868525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.868577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.868592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.877671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.877973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.877989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.884974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.885031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.885046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.892751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.892824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.892839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.898828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.899115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.899131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.907246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.907337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.907352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.910225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.910270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.910284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.913119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.913179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.913197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.916723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.916824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.916839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.919655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.919722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.919738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.922594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.922646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.922662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.925465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.925517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.925532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:28.842 [2024-11-26 07:40:56.928237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:28.842 [2024-11-26 07:40:56.928284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:28.842 [2024-11-26 07:40:56.928299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.931483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.931542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.931557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.934253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.934309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.934325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.936745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.936808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.936823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.940066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.940178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.940193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.942902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.942958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.942973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.945506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.945564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.945579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.947979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.948027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.948042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.950427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.950493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.950507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.952863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.952914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.952929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.955323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.955377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.955392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.957759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.957810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.957825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.960227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.960278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.960293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.962656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.962716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.962731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.965080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.104 [2024-11-26 07:40:56.965131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.104 [2024-11-26 07:40:56.965146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.104 [2024-11-26 07:40:56.967513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:56.967578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:56.967593] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:56.969950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:56.969996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:56.970011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:56.972387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:56.972448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:56.972463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:56.974822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:56.974866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:56.974881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:56.977239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:56.977302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:56.977317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:56.979653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:56.979705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:56.979721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:56.982066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:56.982109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:56.982127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:56.984698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:56.984748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:56.984763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:56.989635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:56.989877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:56.989892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:56.996664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:56.996882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:56.996898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.004696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.004972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.004988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.011819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.012060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.012075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.018897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.019154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.019175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.024530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.024634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.024650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.028513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.028663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 
07:40:57.028679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.035814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.036071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.036091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.045846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.046172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.046189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.055650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.055765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.055781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.060550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.060641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.060656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.065388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.065495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.065510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.072348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.072446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.072461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.075271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.075373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:29.105 [2024-11-26 07:40:57.075388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.077975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.078082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.078097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.080563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.080651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.080666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.083146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.083241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.083256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.085725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.085812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.085827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.088318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.105 [2024-11-26 07:40:57.088397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.105 [2024-11-26 07:40:57.088412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.105 [2024-11-26 07:40:57.090823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.090916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.090931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.093543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.093639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.093655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.096504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.096613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.096628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.099537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.099629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.099645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.103377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.103621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.103636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.108649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.108864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.108882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.115592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.115676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.115691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.120904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.121221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.121238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.128384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.128673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.128696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.135721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.136011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.136027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.144635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.144971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.144987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.151001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.151243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.151259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.160624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.160945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.160961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.170623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.170864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.170880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.181117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.181336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.181352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.106 [2024-11-26 07:40:57.191571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.106 [2024-11-26 07:40:57.191839] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.106 [2024-11-26 07:40:57.191855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.201835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.202143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.202164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.211797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.211998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.212013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.221829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.221942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.221957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.233049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.233226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.233247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.243366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.243629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.243644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.253508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.253837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.253853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.263743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.264024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.264041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.272833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.272942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.272957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.282861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.283104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.283120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.292537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.292798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.292814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.303760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.367 [2024-11-26 07:40:57.304033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.367 [2024-11-26 07:40:57.304049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.367 [2024-11-26 07:40:57.313857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.314126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.314141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.323644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.323919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.323935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.334459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 
07:40:57.334726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.334741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.344015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.344281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.344296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.353658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.353902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.353921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.363606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.363826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.363840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.373553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.373874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.373890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.383494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.383748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.383764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.392677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.392760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.392775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.400401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with 
pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.400680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.400696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.409373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.409651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.409666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.417118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.417232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.417247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.425786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.426016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.426031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.435720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.435944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.435961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.446040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.446289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.446304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.368 [2024-11-26 07:40:57.456019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.368 [2024-11-26 07:40:57.456282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.368 [2024-11-26 07:40:57.456298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.466468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.466713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.466728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.476825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.477025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.477040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.487058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.487366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.487382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.496445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.496726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.496742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.505818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.505962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.505977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.515800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.516185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.516201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.523710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.523796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.523812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.532155] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.532420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.532435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.539797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.539892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.539907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.545831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.545916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.545931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.553152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.553441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.553458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.560732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.560817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.560833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.565129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.565274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.565289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.570737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8 00:31:29.628 [2024-11-26 07:40:57.570984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:29.628 [2024-11-26 07:40:57.570999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:29.628 [2024-11-26 07:40:57.579290] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.628 [2024-11-26 07:40:57.579554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.628 [2024-11-26 07:40:57.579573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:29.628 [2024-11-26 07:40:57.586225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.628 [2024-11-26 07:40:57.586302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.628 [2024-11-26 07:40:57.586317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:29.628 [2024-11-26 07:40:57.593178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.628 [2024-11-26 07:40:57.593477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.628 [2024-11-26 07:40:57.593493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:29.628 [2024-11-26 07:40:57.603335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.628 [2024-11-26 07:40:57.603602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.628 [2024-11-26 07:40:57.603618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:29.628 [2024-11-26 07:40:57.613948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.628 [2024-11-26 07:40:57.614131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.629 [2024-11-26 07:40:57.614147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:29.629 [2024-11-26 07:40:57.624272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.629 [2024-11-26 07:40:57.624559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.629 [2024-11-26 07:40:57.624575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:29.629 [2024-11-26 07:40:57.635150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.629 [2024-11-26 07:40:57.635368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.629 [2024-11-26 07:40:57.635383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
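
These triplets, which continue through the end of the run below, are the expected output of the nvmf_digest_error test: each cycle is one WRITE whose payload failed the CRC32C data-digest check (DDGST) on the NVMe/TCP pair, logged by data_crc32_calc_done, followed by a print of the offending command and of its completion with COMMAND TRANSIENT TRANSPORT ERROR (status 00/22), a retryable transport status. To tally the failures from a captured copy of this log (build.log is a hypothetical file name), count occurrences rather than lines, since the records wrap:

    # count every digest failure, even when several records share one wrapped line
    grep -o 'Data digest error' build.log | wc -l
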
00:31:29.629 [2024-11-26 07:40:57.645167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.629 [2024-11-26 07:40:57.645434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.629 [2024-11-26 07:40:57.645449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:29.629 [2024-11-26 07:40:57.655515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.629 [2024-11-26 07:40:57.655769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.629 [2024-11-26 07:40:57.655785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:29.629 [2024-11-26 07:40:57.665975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.629 [2024-11-26 07:40:57.666125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.629 [2024-11-26 07:40:57.666143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:29.629 5096.50 IOPS, 637.06 MiB/s [2024-11-26T06:40:57.727Z] [2024-11-26 07:40:57.676290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2455860) with pdu=0x2000166ff3c8
00:31:29.629 [2024-11-26 07:40:57.676588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:29.629 [2024-11-26 07:40:57.676604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:29.629 
00:31:29.629                                                                                 Latency(us)
00:31:29.629 [2024-11-26T06:40:57.727Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:29.629 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:31:29.629 nvme0n1                     :       2.01    5089.13     636.14       0.00       0.00    3137.71    1099.09   15182.51
00:31:29.629 [2024-11-26T06:40:57.727Z] ===================================================================================================================
00:31:29.629 [2024-11-26T06:40:57.727Z] Total                       :               5089.13     636.14       0.00       0.00    3137.71    1099.09   15182.51
00:31:29.629 {
00:31:29.629   "results": [
00:31:29.629     {
00:31:29.629       "job": "nvme0n1",
00:31:29.629       "core_mask": "0x2",
00:31:29.629       "workload": "randwrite",
00:31:29.629       "status": "finished",
00:31:29.629       "queue_depth": 16,
00:31:29.629       "io_size": 131072,
00:31:29.629       "runtime": 2.006631,
00:31:29.629       "iops": 5089.126999433379,
00:31:29.629       "mibps": 636.1408749291724,
00:31:29.629       "io_failed": 0,
00:31:29.629       "io_timeout": 0,
00:31:29.629       "avg_latency_us": 3137.7070009139575,
00:31:29.629       "min_latency_us": 1099.0933333333332,
00:31:29.629       "max_latency_us": 15182.506666666666
00:31:29.629     }
00:31:29.629   ],
00:31:29.629   "core_count": 1
00:31:29.629 }
00:31:29.629 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
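
The JSON blob above is bperf's summary for the 2-second randwrite run. Assuming it has been saved to a file (results.json is a hypothetical name), the headline numbers can be pulled out with jq:

    # print one summary line per job from the saved bperf result document
    jq -r '.results[]
        | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json
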
00:31:29.629 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:29.629 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:29.629 | .driver_specific
00:31:29.629 | .nvme_error
00:31:29.629 | .status_code
00:31:29.629 | .command_transient_transport_error'
00:31:29.629 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 330 > 0 ))
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1636973
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1636973 ']'
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1636973
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1636973
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1636973'
00:31:29.888 killing process with pid 1636973
00:31:29.888 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1636973
00:31:29.888 Received shutdown signal, test time was about 2.000000 seconds
00:31:29.888 
00:31:29.888                                                                                 Latency(us)
00:31:29.888 [2024-11-26T06:40:57.986Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:29.888 [2024-11-26T06:40:57.987Z] ===================================================================================================================
00:31:29.889 [2024-11-26T06:40:57.987Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:31:29.889 07:40:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1636973
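
The xtrace above shows how digest.sh turns that iostat document into a pass/fail signal: bperf_rpc sends bdev_get_iostat to bperf's RPC socket, jq drills down to the transient-transport-error counter, and the resulting (( 330 > 0 )) asserts that the injected digest failures were actually counted. A minimal sketch of the two helpers as reconstructed from the trace (the real bodies in host/digest.sh may differ):

    bperf_rpc() {
        # the socket path is taken from the rpc.py invocation traced above
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    get_transient_errcount() {
        # read back how many commands completed with TRANSIENT TRANSPORT ERROR
        bperf_rpc bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    (( $(get_transient_errcount nvme0n1) > 0 ))   # expanded to (( 330 > 0 )) in this run
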
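
killprocess (common/autotest_common.sh) runs twice in this teardown: once for the bperf instance (pid 1636973) just above, and once for the nvmf target app (pid 1634476) below. A hedged reconstruction of its flow from the traced steps; the sudo branch visible at @964 is not exercised in this run:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                # @954: refuse an empty pid
        if ! kill -0 "$pid"; then                # @958: probe only; the builtin prints "No such process" if it is gone
            echo "Process with pid $pid is not found"
            return 0
        fi
        local process_name=
        if [ "$(uname)" = Linux ]; then          # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        # @964: when the traced name is "sudo", the real helper kills the child process instead
        echo "killing process with pid $pid"     # @972
        kill "$pid"                              # @973
        wait "$pid"                              # @978: reap it; only valid for children of this shell
    }
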
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1634476
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1634476 ']'
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1634476
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1634476
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1634476'
00:31:30.148 killing process with pid 1634476
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1634476
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1634476
00:31:30.148 
00:31:30.148 real	0m16.461s
00:31:30.148 user	0m32.697s
00:31:30.148 sys	0m3.532s
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:30.148 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:30.148 ************************************
00:31:30.148 END TEST nvmf_digest_error
00:31:30.148 ************************************
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:30.408 rmmod nvme_tcp
00:31:30.408 rmmod nvme_fabrics
00:31:30.408 rmmod nvme_keyring
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1634476 ']'
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1634476
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1634476 ']'
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1634476
00:31:30.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1634476) - No such process
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1634476 is not found'
00:31:30.408 Process with pid 1634476 is not found
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
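
nvmfcleanup and nvmf_tcp_fini, traced above, are the transport teardown: flush writes, unload the kernel NVMe/TCP stack (the modprobe -r sits inside a retry loop because the module can still be busy, and the rmmod lines show nvme_tcp dragging nvme_fabrics and nvme_keyring out with it), then scrub only the firewall rules SPDK added. A condensed sketch of what the trace is executing; the retry details are an assumption:

    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break   # retried while lingering references keep the module busy
        done
        modprobe -v -r nvme-fabrics
        set -e
    }

    iptr() {
        # drop the SPDK_NVMF-tagged rules, keep every other rule intact
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
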
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.408 07:40:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.957 07:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:32.957 00:31:32.957 real 0m43.512s 00:31:32.957 user 1m8.219s 00:31:32.957 sys 0m13.310s 00:31:32.957 07:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:32.957 07:41:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:32.957 ************************************ 00:31:32.957 END TEST nvmf_digest 00:31:32.957 ************************************ 00:31:32.957 07:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:32.957 07:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:32.957 07:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:32.957 07:41:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:32.957 07:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:32.957 07:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.958 ************************************ 00:31:32.958 START TEST nvmf_bdevperf 00:31:32.958 ************************************ 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:32.958 * Looking for test storage... 
00:31:32.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.958 --rc genhtml_branch_coverage=1 00:31:32.958 --rc genhtml_function_coverage=1 00:31:32.958 --rc genhtml_legend=1 00:31:32.958 --rc geninfo_all_blocks=1 00:31:32.958 --rc geninfo_unexecuted_blocks=1 00:31:32.958 00:31:32.958 ' 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.958 --rc genhtml_branch_coverage=1 00:31:32.958 --rc genhtml_function_coverage=1 00:31:32.958 --rc genhtml_legend=1 00:31:32.958 --rc geninfo_all_blocks=1 00:31:32.958 --rc geninfo_unexecuted_blocks=1 00:31:32.958 00:31:32.958 ' 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.958 --rc genhtml_branch_coverage=1 00:31:32.958 --rc genhtml_function_coverage=1 00:31:32.958 --rc genhtml_legend=1 00:31:32.958 --rc geninfo_all_blocks=1 00:31:32.958 --rc geninfo_unexecuted_blocks=1 00:31:32.958 00:31:32.958 ' 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:32.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:32.958 --rc genhtml_branch_coverage=1 00:31:32.958 --rc genhtml_function_coverage=1 00:31:32.958 --rc genhtml_legend=1 00:31:32.958 --rc geninfo_all_blocks=1 00:31:32.958 --rc geninfo_unexecuted_blocks=1 00:31:32.958 00:31:32.958 ' 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.958 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:32.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:32.959 07:41:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:41.105 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:41.105 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
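Both E810 ports matched the 0x8086:0x159b entry in the cached PCI scan and are bound to the ice driver; the loop that follows resolves each PCI address to its kernel net interface by globbing sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A standalone equivalent of that walk, with the addresses found in this run:

    # Resolve each selected PCI device to its netdev name via sysfs
    # (sketch of the inner loop of gather_supported_nvmf_pci_devs).
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue      # no netdev bound yet
            echo "Found net device under $pci: ${dev##*/}"
        done
    done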
00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:41.105 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:41.105 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:41.106 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.106 07:41:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:41.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:41.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:31:41.106 00:31:41.106 --- 10.0.0.2 ping statistics --- 00:31:41.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.106 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:41.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:41.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:31:41.106 00:31:41.106 --- 10.0.0.1 ping statistics --- 00:31:41.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.106 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1641912 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1641912 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1641912 ']' 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:41.106 07:41:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:41.106 [2024-11-26 07:41:08.349738] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
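With both pings answered, the wiring is in place: nvmf_tcp_init moved the target port cvl_0_0 (10.0.0.2) into the cvl_0_0_ns_spdk namespace and left the initiator port cvl_0_1 (10.0.0.1) in the default one, so host and target traffic crosses the physical link even though everything runs on one machine. The setup, condensed from the records above (root required):

    # Target NIC in a private namespace, initiator NIC in the default one
    # (condensed from the nvmf_tcp_init sequence traced above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The target itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is the DPDK initialization starting in the next record.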
00:31:41.106 [2024-11-26 07:41:08.349804] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.106 [2024-11-26 07:41:08.453049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:41.106 [2024-11-26 07:41:08.505688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:41.106 [2024-11-26 07:41:08.505748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:41.106 [2024-11-26 07:41:08.505756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:41.106 [2024-11-26 07:41:08.505763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:41.106 [2024-11-26 07:41:08.505770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:41.106 [2024-11-26 07:41:08.507587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:41.106 [2024-11-26 07:41:08.507747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.106 [2024-11-26 07:41:08.507748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:41.106 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:41.106 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:41.106 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:41.106 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:41.106 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:41.368 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.368 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:41.368 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.368 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:41.368 [2024-11-26 07:41:09.230409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.368 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.368 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:41.369 Malloc0 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
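nvmf_tgt came up with -m 0xE, a core mask whose set bits (binary 1110) place the three reactors on cores 1-3 and leave core 0 free for the bdevperf initiator launched later with -c 0x1. Provisioning then proceeds over /var/tmp/spdk.sock; rpc_cmd is a thin wrapper around scripts/rpc.py, so the three calls so far are equivalent to:

    # Stand up the TCP target: transport, a RAM-backed bdev, a subsystem
    # (flags copied from the rpc_cmd records above).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                               # -a: allow any host

The namespace and listener are attached in the next two records.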
00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:41.369 [2024-11-26 07:41:09.301667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:41.369 { 00:31:41.369 "params": { 00:31:41.369 "name": "Nvme$subsystem", 00:31:41.369 "trtype": "$TEST_TRANSPORT", 00:31:41.369 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.369 "adrfam": "ipv4", 00:31:41.369 "trsvcid": "$NVMF_PORT", 00:31:41.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.369 "hdgst": ${hdgst:-false}, 00:31:41.369 "ddgst": ${ddgst:-false} 00:31:41.369 }, 00:31:41.369 "method": "bdev_nvme_attach_controller" 00:31:41.369 } 00:31:41.369 EOF 00:31:41.369 )") 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:41.369 07:41:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:41.369 "params": { 00:31:41.369 "name": "Nvme1", 00:31:41.369 "trtype": "tcp", 00:31:41.369 "traddr": "10.0.0.2", 00:31:41.369 "adrfam": "ipv4", 00:31:41.369 "trsvcid": "4420", 00:31:41.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:41.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:41.369 "hdgst": false, 00:31:41.369 "ddgst": false 00:31:41.369 }, 00:31:41.369 "method": "bdev_nvme_attach_controller" 00:31:41.369 }' 00:31:41.369 [2024-11-26 07:41:09.361724] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
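The listener on 10.0.0.2:4420 completes the target side; the host side needs no nvme-cli setup because bdevperf is handed its controller as JSON. gen_nvmf_target_json emits the bdev_nvme_attach_controller entry printed above (and wraps it into a full bdev-subsystem config before handing it over), which the harness feeds to bdevperf through a process-substitution fd (--json /dev/fd/62). Reflowed, the entry is:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

hdgst/ddgst stay false, so this test exercises the plain TCP path rather than the digest offloads covered earlier; -q 128 -o 4096 -w verify -t 1 gives a one-second verify workload at queue depth 128 with 4 KiB I/O.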
00:31:41.369 [2024-11-26 07:41:09.361790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642026 ] 00:31:41.369 [2024-11-26 07:41:09.453997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.631 [2024-11-26 07:41:09.506830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.631 Running I/O for 1 seconds... 00:31:43.017 8555.00 IOPS, 33.42 MiB/s 00:31:43.017 Latency(us) 00:31:43.017 [2024-11-26T06:41:11.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:43.017 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:43.017 Verification LBA range: start 0x0 length 0x4000 00:31:43.017 Nvme1n1 : 1.01 8608.32 33.63 0.00 0.00 14808.41 2908.16 13489.49 00:31:43.017 [2024-11-26T06:41:11.115Z] =================================================================================================================== 00:31:43.017 [2024-11-26T06:41:11.115Z] Total : 8608.32 33.63 0.00 0.00 14808.41 2908.16 13489.49 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1642366 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:43.018 { 00:31:43.018 "params": { 00:31:43.018 "name": "Nvme$subsystem", 00:31:43.018 "trtype": "$TEST_TRANSPORT", 00:31:43.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:43.018 "adrfam": "ipv4", 00:31:43.018 "trsvcid": "$NVMF_PORT", 00:31:43.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:43.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:43.018 "hdgst": ${hdgst:-false}, 00:31:43.018 "ddgst": ${ddgst:-false} 00:31:43.018 }, 00:31:43.018 "method": "bdev_nvme_attach_controller" 00:31:43.018 } 00:31:43.018 EOF 00:31:43.018 )") 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
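The one-second run finishes clean: 8608 IOPS and zero failures in the table above. The second launch reuses the same generated config but runs for 15 seconds in the background, because this time the test deliberately pulls the target out from under it. The sequence, reduced to its bones (pids are the ones recorded in this run; gen_nvmf_target_json is the harness helper shown above):

    # Inject a target failure during live I/O (sketch of the steps above).
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!        # 1642366 here
    sleep 3               # let I/O ramp up
    kill -9 "$nvmfpid"    # 1641912: hard-kill nvmf_tgt mid-run
    sleep 3               # give the host path time to notice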
00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:43.018 07:41:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:43.018 "params": { 00:31:43.018 "name": "Nvme1", 00:31:43.018 "trtype": "tcp", 00:31:43.018 "traddr": "10.0.0.2", 00:31:43.018 "adrfam": "ipv4", 00:31:43.018 "trsvcid": "4420", 00:31:43.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:43.018 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:43.018 "hdgst": false, 00:31:43.018 "ddgst": false 00:31:43.018 }, 00:31:43.018 "method": "bdev_nvme_attach_controller" 00:31:43.018 }' 00:31:43.018 [2024-11-26 07:41:10.936550] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:31:43.018 [2024-11-26 07:41:10.936623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642366 ] 00:31:43.018 [2024-11-26 07:41:11.031117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.018 [2024-11-26 07:41:11.082344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.278 Running I/O for 15 seconds... 00:31:45.602 10035.00 IOPS, 39.20 MiB/s [2024-11-26T06:41:13.964Z] 11118.50 IOPS, 43.43 MiB/s [2024-11-26T06:41:13.964Z] 07:41:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1641912 00:31:45.866 07:41:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:31:45.866 [2024-11-26 07:41:13.889137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.867 [2024-11-26 07:41:13.889182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.867 [2024-11-26 07:41:13.889203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.867 [2024-11-26 07:41:13.889213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.867 [2024-11-26 07:41:13.889226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.867 [2024-11-26 07:41:13.889235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.867 [2024-11-26 07:41:13.889248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.867 [2024-11-26 07:41:13.889256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.867 [2024-11-26 07:41:13.889267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.867 [2024-11-26 07:41:13.889274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.867 [2024-11-26 07:41:13.889284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.867 
[2024-11-26 07:41:13.889292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same READ/completion pair repeats for every remaining in-flight command on qid:1 (cids vary; LBAs 102176 through 102648 in steps of 8), and each completion is identical: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:31:45.868 [2024-11-26 07:41:13.890490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 07:41:13.890507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 07:41:13.890524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 07:41:13.890541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 07:41:13.890559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 07:41:13.890576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 07:41:13.890593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 07:41:13.890610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 07:41:13.890627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 07:41:13.890644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 
07:41:13.890666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.868 [2024-11-26 07:41:13.890682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.868 [2024-11-26 07:41:13.890690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.890988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.890995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891179] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.869 [2024-11-26 07:41:13.891366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.869 [2024-11-26 07:41:13.891373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.891383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.870 [2024-11-26 07:41:13.891390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.891399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.870 [2024-11-26 07:41:13.891407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.891416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.870 [2024-11-26 07:41:13.891423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.891433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.870 [2024-11-26 07:41:13.891440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.891449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.870 [2024-11-26 07:41:13.891457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.891466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.870 [2024-11-26 07:41:13.891473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.891483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.870 [2024-11-26 07:41:13.891490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.891499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.870 [2024-11-26 07:41:13.891506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.891518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103136 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:45.870 [2024-11-26 07:41:13.891526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.891534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805450 is same with the state(6) to be set 00:31:45.870 [2024-11-26 07:41:13.891549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:45.870 [2024-11-26 07:41:13.891555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:45.870 [2024-11-26 07:41:13.891563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103144 len:8 PRP1 0x0 PRP2 0x0 00:31:45.870 [2024-11-26 07:41:13.891573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:45.870 [2024-11-26 07:41:13.895221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.870 [2024-11-26 07:41:13.895276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:45.870 [2024-11-26 07:41:13.896031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.870 [2024-11-26 07:41:13.896049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:45.870 [2024-11-26 07:41:13.896059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:45.870 [2024-11-26 07:41:13.896281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:45.870 [2024-11-26 07:41:13.896499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.870 [2024-11-26 07:41:13.896508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.870 [2024-11-26 07:41:13.896517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.870 [2024-11-26 07:41:13.896526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
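[editor's note] On Linux, errno = 111 is ECONNREFUSED: the posix_sock_create failure above means the target at 10.0.0.2:4420 (the conventional NVMe/TCP port) actively refused the TCP connection, as happens when the subsystem listener goes away mid-test. A minimal standalone C sketch of that failure mode follows; it is not SPDK code, and the address/port are simply copied from the log for illustration.

    /* Sketch only: reproduce "connect() failed, errno = 111" by dialing a
     * TCP endpoint that is reachable but has no listener on the port. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With the host up but nothing listening, this prints errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }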
00:31:45.870 [2024-11-26 07:41:13.909303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.870 [2024-11-26 07:41:13.909837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.870 [2024-11-26 07:41:13.909877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:45.870 [2024-11-26 07:41:13.909889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:45.870 [2024-11-26 07:41:13.910127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:45.870 [2024-11-26 07:41:13.910359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.870 [2024-11-26 07:41:13.910371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.870 [2024-11-26 07:41:13.910380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.870 [2024-11-26 07:41:13.910388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:45.870 [2024-11-26 07:41:13.923152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.870 [2024-11-26 07:41:13.923670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.870 [2024-11-26 07:41:13.923711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:45.870 [2024-11-26 07:41:13.923728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:45.870 [2024-11-26 07:41:13.923968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:45.870 [2024-11-26 07:41:13.924204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.870 [2024-11-26 07:41:13.924216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.870 [2024-11-26 07:41:13.924225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.870 [2024-11-26 07:41:13.924233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:45.870 [2024-11-26 07:41:13.936999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.870 [2024-11-26 07:41:13.937665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.870 [2024-11-26 07:41:13.937707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:45.870 [2024-11-26 07:41:13.937718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:45.870 [2024-11-26 07:41:13.937956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:45.870 [2024-11-26 07:41:13.938185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.870 [2024-11-26 07:41:13.938196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.870 [2024-11-26 07:41:13.938205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.870 [2024-11-26 07:41:13.938213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:45.870 [2024-11-26 07:41:13.950773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:45.870 [2024-11-26 07:41:13.951477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:45.870 [2024-11-26 07:41:13.951521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:45.870 [2024-11-26 07:41:13.951532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:45.870 [2024-11-26 07:41:13.951770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:45.870 [2024-11-26 07:41:13.951991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:45.870 [2024-11-26 07:41:13.952001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:45.870 [2024-11-26 07:41:13.952009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:45.870 [2024-11-26 07:41:13.952017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.132 [2024-11-26 07:41:13.964579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.132 [2024-11-26 07:41:13.965256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.132 [2024-11-26 07:41:13.965301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.132 [2024-11-26 07:41:13.965312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.132 [2024-11-26 07:41:13.965553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.132 [2024-11-26 07:41:13.965780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.132 [2024-11-26 07:41:13.965792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.132 [2024-11-26 07:41:13.965800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.132 [2024-11-26 07:41:13.965808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.132 [2024-11-26 07:41:13.978383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.132 [2024-11-26 07:41:13.979011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.132 [2024-11-26 07:41:13.979057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.132 [2024-11-26 07:41:13.979068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.132 [2024-11-26 07:41:13.979319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.132 [2024-11-26 07:41:13.979542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.132 [2024-11-26 07:41:13.979553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.132 [2024-11-26 07:41:13.979562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.132 [2024-11-26 07:41:13.979570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.132 [2024-11-26 07:41:13.992320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.133 [2024-11-26 07:41:13.992912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.133 [2024-11-26 07:41:13.992936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.133 [2024-11-26 07:41:13.992945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.133 [2024-11-26 07:41:13.993168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.133 [2024-11-26 07:41:13.993387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.133 [2024-11-26 07:41:13.993399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.133 [2024-11-26 07:41:13.993407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.133 [2024-11-26 07:41:13.993415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.133 [2024-11-26 07:41:14.006170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.133 [2024-11-26 07:41:14.006796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.133 [2024-11-26 07:41:14.006848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.133 [2024-11-26 07:41:14.006860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.133 [2024-11-26 07:41:14.007103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.133 [2024-11-26 07:41:14.007337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.133 [2024-11-26 07:41:14.007350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.133 [2024-11-26 07:41:14.007364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.133 [2024-11-26 07:41:14.007373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.133 [2024-11-26 07:41:14.019951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.133 [2024-11-26 07:41:14.020606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.133 [2024-11-26 07:41:14.020663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.133 [2024-11-26 07:41:14.020676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.133 [2024-11-26 07:41:14.020924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.133 [2024-11-26 07:41:14.021148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.133 [2024-11-26 07:41:14.021174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.133 [2024-11-26 07:41:14.021185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.133 [2024-11-26 07:41:14.021194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.133 [2024-11-26 07:41:14.033767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.133 [2024-11-26 07:41:14.034491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.133 [2024-11-26 07:41:14.034552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.133 [2024-11-26 07:41:14.034565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.133 [2024-11-26 07:41:14.034815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.133 [2024-11-26 07:41:14.035040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.133 [2024-11-26 07:41:14.035051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.133 [2024-11-26 07:41:14.035060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.133 [2024-11-26 07:41:14.035070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.133 [2024-11-26 07:41:14.047681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.133 [2024-11-26 07:41:14.048290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.133 [2024-11-26 07:41:14.048354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.133 [2024-11-26 07:41:14.048368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.133 [2024-11-26 07:41:14.048622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.133 [2024-11-26 07:41:14.048847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.133 [2024-11-26 07:41:14.048860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.133 [2024-11-26 07:41:14.048869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.133 [2024-11-26 07:41:14.048878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.133 [2024-11-26 07:41:14.061482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.133 [2024-11-26 07:41:14.062115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.133 [2024-11-26 07:41:14.062145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.133 [2024-11-26 07:41:14.062155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.133 [2024-11-26 07:41:14.062386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.133 [2024-11-26 07:41:14.062605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.133 [2024-11-26 07:41:14.062618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.133 [2024-11-26 07:41:14.062626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.133 [2024-11-26 07:41:14.062635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.133 [2024-11-26 07:41:14.075234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.133 [2024-11-26 07:41:14.075908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.133 [2024-11-26 07:41:14.075972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.133 [2024-11-26 07:41:14.075986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.133 [2024-11-26 07:41:14.076254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.133 [2024-11-26 07:41:14.076480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.133 [2024-11-26 07:41:14.076493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.133 [2024-11-26 07:41:14.076502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.133 [2024-11-26 07:41:14.076512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.133 [2024-11-26 07:41:14.089080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.133 [2024-11-26 07:41:14.089694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.133 [2024-11-26 07:41:14.089726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.133 [2024-11-26 07:41:14.089735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.133 [2024-11-26 07:41:14.089954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.133 [2024-11-26 07:41:14.090224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.133 [2024-11-26 07:41:14.090240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.133 [2024-11-26 07:41:14.090249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.133 [2024-11-26 07:41:14.090258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.133 [2024-11-26 07:41:14.103037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.133 [2024-11-26 07:41:14.103753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.133 [2024-11-26 07:41:14.103818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.133 [2024-11-26 07:41:14.103838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.133 [2024-11-26 07:41:14.104090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.133 [2024-11-26 07:41:14.104330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.133 [2024-11-26 07:41:14.104343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.133 [2024-11-26 07:41:14.104352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.133 [2024-11-26 07:41:14.104361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.133 [2024-11-26 07:41:14.116940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.133 [2024-11-26 07:41:14.117653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.133 [2024-11-26 07:41:14.117717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.133 [2024-11-26 07:41:14.117730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.133 [2024-11-26 07:41:14.117982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.133 [2024-11-26 07:41:14.118221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.133 [2024-11-26 07:41:14.118235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.133 [2024-11-26 07:41:14.118245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.134 [2024-11-26 07:41:14.118254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.134 [2024-11-26 07:41:14.130826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.134 [2024-11-26 07:41:14.131563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.134 [2024-11-26 07:41:14.131628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.134 [2024-11-26 07:41:14.131641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.134 [2024-11-26 07:41:14.131894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.134 [2024-11-26 07:41:14.132119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.134 [2024-11-26 07:41:14.132131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.134 [2024-11-26 07:41:14.132140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.134 [2024-11-26 07:41:14.132149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.134 [2024-11-26 07:41:14.144601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.134 [2024-11-26 07:41:14.145293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.134 [2024-11-26 07:41:14.145359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.134 [2024-11-26 07:41:14.145371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.134 [2024-11-26 07:41:14.145624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.134 [2024-11-26 07:41:14.145857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.134 [2024-11-26 07:41:14.145871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.134 [2024-11-26 07:41:14.145880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.134 [2024-11-26 07:41:14.145890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.134 [2024-11-26 07:41:14.158495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.134 [2024-11-26 07:41:14.159115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.134 [2024-11-26 07:41:14.159192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.134 [2024-11-26 07:41:14.159207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.134 [2024-11-26 07:41:14.159459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.134 [2024-11-26 07:41:14.159685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.134 [2024-11-26 07:41:14.159697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.134 [2024-11-26 07:41:14.159707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.134 [2024-11-26 07:41:14.159718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.134 [2024-11-26 07:41:14.172316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.134 [2024-11-26 07:41:14.172905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.134 [2024-11-26 07:41:14.172936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.134 [2024-11-26 07:41:14.172946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.134 [2024-11-26 07:41:14.173176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.134 [2024-11-26 07:41:14.173400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.134 [2024-11-26 07:41:14.173412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.134 [2024-11-26 07:41:14.173420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.134 [2024-11-26 07:41:14.173429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.134 [2024-11-26 07:41:14.186207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.134 [2024-11-26 07:41:14.186875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.134 [2024-11-26 07:41:14.186939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.134 [2024-11-26 07:41:14.186951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.134 [2024-11-26 07:41:14.187222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.134 [2024-11-26 07:41:14.187448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.134 [2024-11-26 07:41:14.187460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.134 [2024-11-26 07:41:14.187477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.134 [2024-11-26 07:41:14.187486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.134 [2024-11-26 07:41:14.200070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.134 [2024-11-26 07:41:14.200781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.134 [2024-11-26 07:41:14.200846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.134 [2024-11-26 07:41:14.200859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.134 [2024-11-26 07:41:14.201112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.134 [2024-11-26 07:41:14.201351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.134 [2024-11-26 07:41:14.201365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.134 [2024-11-26 07:41:14.201374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.134 [2024-11-26 07:41:14.201383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.134 [2024-11-26 07:41:14.213954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.134 [2024-11-26 07:41:14.214675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.134 [2024-11-26 07:41:14.214740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.134 [2024-11-26 07:41:14.214753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.134 [2024-11-26 07:41:14.215006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.134 [2024-11-26 07:41:14.215246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.134 [2024-11-26 07:41:14.215260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.134 [2024-11-26 07:41:14.215269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.134 [2024-11-26 07:41:14.215278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.397 [2024-11-26 07:41:14.227870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.397 [2024-11-26 07:41:14.228547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.397 [2024-11-26 07:41:14.228611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.397 [2024-11-26 07:41:14.228624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.397 [2024-11-26 07:41:14.228877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.397 [2024-11-26 07:41:14.229103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.397 [2024-11-26 07:41:14.229115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.397 [2024-11-26 07:41:14.229125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.397 [2024-11-26 07:41:14.229135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.397 [2024-11-26 07:41:14.241727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.397 [2024-11-26 07:41:14.242361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.397 [2024-11-26 07:41:14.242393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.397 [2024-11-26 07:41:14.242403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.397 [2024-11-26 07:41:14.242623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.397 [2024-11-26 07:41:14.242842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.397 [2024-11-26 07:41:14.242856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.397 [2024-11-26 07:41:14.242864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.397 [2024-11-26 07:41:14.242873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.397 [2024-11-26 07:41:14.255665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.397 [2024-11-26 07:41:14.256410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.397 [2024-11-26 07:41:14.256475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.397 [2024-11-26 07:41:14.256487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.397 [2024-11-26 07:41:14.256740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.397 [2024-11-26 07:41:14.256966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.397 [2024-11-26 07:41:14.256978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.397 [2024-11-26 07:41:14.256988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.397 [2024-11-26 07:41:14.256997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.398 9716.67 IOPS, 37.96 MiB/s [2024-11-26T06:41:14.496Z] [2024-11-26 07:41:14.270460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.398 [2024-11-26 07:41:14.271200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.398 [2024-11-26 07:41:14.271266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.398 [2024-11-26 07:41:14.271279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.398 [2024-11-26 07:41:14.271532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.398 [2024-11-26 07:41:14.271757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.398 [2024-11-26 07:41:14.271770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.398 [2024-11-26 07:41:14.271779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.398 [2024-11-26 07:41:14.271790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.398 [2024-11-26 07:41:14.284379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.398 [2024-11-26 07:41:14.285105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.398 [2024-11-26 07:41:14.285179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.398 [2024-11-26 07:41:14.285200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.398 [2024-11-26 07:41:14.285454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.398 [2024-11-26 07:41:14.285679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.398 [2024-11-26 07:41:14.285690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.398 [2024-11-26 07:41:14.285699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.398 [2024-11-26 07:41:14.285709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.398 [2024-11-26 07:41:14.298135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.398 [2024-11-26 07:41:14.298880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.398 [2024-11-26 07:41:14.298944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.398 [2024-11-26 07:41:14.298957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.398 [2024-11-26 07:41:14.299225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.398 [2024-11-26 07:41:14.299453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.398 [2024-11-26 07:41:14.299465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.398 [2024-11-26 07:41:14.299474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.398 [2024-11-26 07:41:14.299484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.398 [2024-11-26 07:41:14.312068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.398 [2024-11-26 07:41:14.312785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.398 [2024-11-26 07:41:14.312849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.398 [2024-11-26 07:41:14.312862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.398 [2024-11-26 07:41:14.313115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.398 [2024-11-26 07:41:14.313355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.398 [2024-11-26 07:41:14.313369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.398 [2024-11-26 07:41:14.313378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.398 [2024-11-26 07:41:14.313387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.398 [2024-11-26 07:41:14.325965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.398 [2024-11-26 07:41:14.326701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.398 [2024-11-26 07:41:14.326765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.398 [2024-11-26 07:41:14.326778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.398 [2024-11-26 07:41:14.327031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.398 [2024-11-26 07:41:14.327285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.398 [2024-11-26 07:41:14.327299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.398 [2024-11-26 07:41:14.327308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.398 [2024-11-26 07:41:14.327318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.398 [2024-11-26 07:41:14.339891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.398 [2024-11-26 07:41:14.340580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.398 [2024-11-26 07:41:14.340645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.398 [2024-11-26 07:41:14.340658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.398 [2024-11-26 07:41:14.340912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.398 [2024-11-26 07:41:14.341137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.398 [2024-11-26 07:41:14.341149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.398 [2024-11-26 07:41:14.341174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.398 [2024-11-26 07:41:14.341185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.398 [2024-11-26 07:41:14.353787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.398 [2024-11-26 07:41:14.354530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.398 [2024-11-26 07:41:14.354595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.398 [2024-11-26 07:41:14.354608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.398 [2024-11-26 07:41:14.354861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.398 [2024-11-26 07:41:14.355087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.398 [2024-11-26 07:41:14.355099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.398 [2024-11-26 07:41:14.355108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.398 [2024-11-26 07:41:14.355117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.398 [2024-11-26 07:41:14.367723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.398 [2024-11-26 07:41:14.368486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.398 [2024-11-26 07:41:14.368552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.398 [2024-11-26 07:41:14.368565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.398 [2024-11-26 07:41:14.368818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.398 [2024-11-26 07:41:14.369043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.398 [2024-11-26 07:41:14.369056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.398 [2024-11-26 07:41:14.369072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.398 [2024-11-26 07:41:14.369082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.398 [2024-11-26 07:41:14.381484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.398 [2024-11-26 07:41:14.382226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.398 [2024-11-26 07:41:14.382291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.398 [2024-11-26 07:41:14.382303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.398 [2024-11-26 07:41:14.382556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.398 [2024-11-26 07:41:14.382781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.398 [2024-11-26 07:41:14.382793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.398 [2024-11-26 07:41:14.382803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.398 [2024-11-26 07:41:14.382812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.398 [2024-11-26 07:41:14.395409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.398 [2024-11-26 07:41:14.396129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.398 [2024-11-26 07:41:14.396204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.398 [2024-11-26 07:41:14.396218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.398 [2024-11-26 07:41:14.396470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.398 [2024-11-26 07:41:14.396695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.399 [2024-11-26 07:41:14.396708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.399 [2024-11-26 07:41:14.396718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.399 [2024-11-26 07:41:14.396729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.399 [2024-11-26 07:41:14.409325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.399 [2024-11-26 07:41:14.410034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.399 [2024-11-26 07:41:14.410100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.399 [2024-11-26 07:41:14.410115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.399 [2024-11-26 07:41:14.410383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.399 [2024-11-26 07:41:14.410609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.399 [2024-11-26 07:41:14.410622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.399 [2024-11-26 07:41:14.410631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.399 [2024-11-26 07:41:14.410640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.399 [2024-11-26 07:41:14.423218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.399 [2024-11-26 07:41:14.423902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.399 [2024-11-26 07:41:14.423967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.399 [2024-11-26 07:41:14.423980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.399 [2024-11-26 07:41:14.424247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.399 [2024-11-26 07:41:14.424474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.399 [2024-11-26 07:41:14.424486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.399 [2024-11-26 07:41:14.424495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.399 [2024-11-26 07:41:14.424505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.399 [2024-11-26 07:41:14.437087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.399 [2024-11-26 07:41:14.437843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.399 [2024-11-26 07:41:14.437908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.399 [2024-11-26 07:41:14.437920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.399 [2024-11-26 07:41:14.438188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.399 [2024-11-26 07:41:14.438414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.399 [2024-11-26 07:41:14.438428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.399 [2024-11-26 07:41:14.438437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.399 [2024-11-26 07:41:14.438446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.399 [2024-11-26 07:41:14.450837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.399 [2024-11-26 07:41:14.451537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.399 [2024-11-26 07:41:14.451601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.399 [2024-11-26 07:41:14.451614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.399 [2024-11-26 07:41:14.451867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.399 [2024-11-26 07:41:14.452092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.399 [2024-11-26 07:41:14.452104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.399 [2024-11-26 07:41:14.452114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.399 [2024-11-26 07:41:14.452123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.399 [2024-11-26 07:41:14.464596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.399 [2024-11-26 07:41:14.465270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.399 [2024-11-26 07:41:14.465335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.399 [2024-11-26 07:41:14.465356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.399 [2024-11-26 07:41:14.465610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.399 [2024-11-26 07:41:14.465836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.399 [2024-11-26 07:41:14.465849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.399 [2024-11-26 07:41:14.465858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.399 [2024-11-26 07:41:14.465868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.399 [2024-11-26 07:41:14.478481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.399 [2024-11-26 07:41:14.479195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.399 [2024-11-26 07:41:14.479261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.399 [2024-11-26 07:41:14.479273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.399 [2024-11-26 07:41:14.479527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.399 [2024-11-26 07:41:14.479752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.399 [2024-11-26 07:41:14.479765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.399 [2024-11-26 07:41:14.479774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.399 [2024-11-26 07:41:14.479783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.662 [2024-11-26 07:41:14.492382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.662 [2024-11-26 07:41:14.493014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.662 [2024-11-26 07:41:14.493044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.662 [2024-11-26 07:41:14.493054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.662 [2024-11-26 07:41:14.493285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.662 [2024-11-26 07:41:14.493507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.662 [2024-11-26 07:41:14.493518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.662 [2024-11-26 07:41:14.493527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.662 [2024-11-26 07:41:14.493536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.662 [2024-11-26 07:41:14.506369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.662 [2024-11-26 07:41:14.507072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.662 [2024-11-26 07:41:14.507136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.662 [2024-11-26 07:41:14.507149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.662 [2024-11-26 07:41:14.507418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.662 [2024-11-26 07:41:14.507652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.662 [2024-11-26 07:41:14.507664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.662 [2024-11-26 07:41:14.507673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.662 [2024-11-26 07:41:14.507682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.662 [2024-11-26 07:41:14.520264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.662 [2024-11-26 07:41:14.520946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.662 [2024-11-26 07:41:14.521010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.662 [2024-11-26 07:41:14.521023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.662 [2024-11-26 07:41:14.521289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.662 [2024-11-26 07:41:14.521517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.662 [2024-11-26 07:41:14.521529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.662 [2024-11-26 07:41:14.521538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.662 [2024-11-26 07:41:14.521548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.662 [2024-11-26 07:41:14.534125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.662 [2024-11-26 07:41:14.534817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.662 [2024-11-26 07:41:14.534881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.662 [2024-11-26 07:41:14.534894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.662 [2024-11-26 07:41:14.535147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.662 [2024-11-26 07:41:14.535387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.662 [2024-11-26 07:41:14.535400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.662 [2024-11-26 07:41:14.535409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.662 [2024-11-26 07:41:14.535419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.662 [2024-11-26 07:41:14.548016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.662 [2024-11-26 07:41:14.548702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.662 [2024-11-26 07:41:14.548766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.662 [2024-11-26 07:41:14.548779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.662 [2024-11-26 07:41:14.549032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.662 [2024-11-26 07:41:14.549271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.662 [2024-11-26 07:41:14.549284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.662 [2024-11-26 07:41:14.549300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.662 [2024-11-26 07:41:14.549310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.662 [2024-11-26 07:41:14.561892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.662 [2024-11-26 07:41:14.562486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.662 [2024-11-26 07:41:14.562518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.662 [2024-11-26 07:41:14.562528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.662 [2024-11-26 07:41:14.562749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.662 [2024-11-26 07:41:14.562969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.662 [2024-11-26 07:41:14.562981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.662 [2024-11-26 07:41:14.562990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.662 [2024-11-26 07:41:14.563000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.662 [2024-11-26 07:41:14.575793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.662 [2024-11-26 07:41:14.576495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.662 [2024-11-26 07:41:14.576559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.662 [2024-11-26 07:41:14.576572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.662 [2024-11-26 07:41:14.576824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.662 [2024-11-26 07:41:14.577050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.662 [2024-11-26 07:41:14.577061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.662 [2024-11-26 07:41:14.577071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.663 [2024-11-26 07:41:14.577080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.663 [2024-11-26 07:41:14.589672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.663 [2024-11-26 07:41:14.590271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.663 [2024-11-26 07:41:14.590336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.663 [2024-11-26 07:41:14.590349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.663 [2024-11-26 07:41:14.590604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.663 [2024-11-26 07:41:14.590830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.663 [2024-11-26 07:41:14.590842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.663 [2024-11-26 07:41:14.590851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.663 [2024-11-26 07:41:14.590860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.663 [2024-11-26 07:41:14.603458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.663 [2024-11-26 07:41:14.604098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.663 [2024-11-26 07:41:14.604128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.663 [2024-11-26 07:41:14.604138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.663 [2024-11-26 07:41:14.604368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.663 [2024-11-26 07:41:14.604589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.663 [2024-11-26 07:41:14.604601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.663 [2024-11-26 07:41:14.604609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.663 [2024-11-26 07:41:14.604617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.663 [2024-11-26 07:41:14.617381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.663 [2024-11-26 07:41:14.618062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.663 [2024-11-26 07:41:14.618128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.663 [2024-11-26 07:41:14.618140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.663 [2024-11-26 07:41:14.618411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.663 [2024-11-26 07:41:14.618638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.663 [2024-11-26 07:41:14.618650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.663 [2024-11-26 07:41:14.618659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.663 [2024-11-26 07:41:14.618669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.663 [2024-11-26 07:41:14.631246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.663 [2024-11-26 07:41:14.631968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.663 [2024-11-26 07:41:14.632032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.663 [2024-11-26 07:41:14.632045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.663 [2024-11-26 07:41:14.632312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.663 [2024-11-26 07:41:14.632539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.663 [2024-11-26 07:41:14.632552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.663 [2024-11-26 07:41:14.632561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.663 [2024-11-26 07:41:14.632571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.663 [2024-11-26 07:41:14.645155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.663 [2024-11-26 07:41:14.645866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.663 [2024-11-26 07:41:14.645929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.663 [2024-11-26 07:41:14.645949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.663 [2024-11-26 07:41:14.646215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.663 [2024-11-26 07:41:14.646444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.663 [2024-11-26 07:41:14.646458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.663 [2024-11-26 07:41:14.646467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.663 [2024-11-26 07:41:14.646478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.663 [2024-11-26 07:41:14.659122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.663 [2024-11-26 07:41:14.659826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.663 [2024-11-26 07:41:14.659892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.663 [2024-11-26 07:41:14.659905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.663 [2024-11-26 07:41:14.660171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.663 [2024-11-26 07:41:14.660397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.663 [2024-11-26 07:41:14.660412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.663 [2024-11-26 07:41:14.660423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.663 [2024-11-26 07:41:14.660435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.663 [2024-11-26 07:41:14.673028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.663 [2024-11-26 07:41:14.673748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.663 [2024-11-26 07:41:14.673813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.663 [2024-11-26 07:41:14.673826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.663 [2024-11-26 07:41:14.674079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.663 [2024-11-26 07:41:14.674321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.663 [2024-11-26 07:41:14.674335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.663 [2024-11-26 07:41:14.674344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.663 [2024-11-26 07:41:14.674353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.663 [2024-11-26 07:41:14.686926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.663 [2024-11-26 07:41:14.687646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.663 [2024-11-26 07:41:14.687711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.663 [2024-11-26 07:41:14.687723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.663 [2024-11-26 07:41:14.687977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.663 [2024-11-26 07:41:14.688222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.663 [2024-11-26 07:41:14.688235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.663 [2024-11-26 07:41:14.688244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.663 [2024-11-26 07:41:14.688253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.663 [2024-11-26 07:41:14.700820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.663 [2024-11-26 07:41:14.701448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.663 [2024-11-26 07:41:14.701480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.663 [2024-11-26 07:41:14.701489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.663 [2024-11-26 07:41:14.701710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.663 [2024-11-26 07:41:14.701930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.663 [2024-11-26 07:41:14.701942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.663 [2024-11-26 07:41:14.701950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.663 [2024-11-26 07:41:14.701958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.663 [2024-11-26 07:41:14.714789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.663 [2024-11-26 07:41:14.715396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.663 [2024-11-26 07:41:14.715422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.663 [2024-11-26 07:41:14.715431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.663 [2024-11-26 07:41:14.715650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.664 [2024-11-26 07:41:14.715870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.664 [2024-11-26 07:41:14.715882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.664 [2024-11-26 07:41:14.715891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.664 [2024-11-26 07:41:14.715898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.664 [2024-11-26 07:41:14.728672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.664 [2024-11-26 07:41:14.729446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.664 [2024-11-26 07:41:14.729511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.664 [2024-11-26 07:41:14.729524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.664 [2024-11-26 07:41:14.729777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.664 [2024-11-26 07:41:14.730002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.664 [2024-11-26 07:41:14.730014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.664 [2024-11-26 07:41:14.730030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.664 [2024-11-26 07:41:14.730040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.664 [2024-11-26 07:41:14.742430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.664 [2024-11-26 07:41:14.743126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.664 [2024-11-26 07:41:14.743200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.664 [2024-11-26 07:41:14.743214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.664 [2024-11-26 07:41:14.743467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.664 [2024-11-26 07:41:14.743692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.664 [2024-11-26 07:41:14.743704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.664 [2024-11-26 07:41:14.743713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.664 [2024-11-26 07:41:14.743723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.926 [2024-11-26 07:41:14.756346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:46.926 [2024-11-26 07:41:14.757061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:46.926 [2024-11-26 07:41:14.757126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:46.926 [2024-11-26 07:41:14.757139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:46.926 [2024-11-26 07:41:14.757406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:46.926 [2024-11-26 07:41:14.757634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:46.926 [2024-11-26 07:41:14.757645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:46.926 [2024-11-26 07:41:14.757654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:46.926 [2024-11-26 07:41:14.757665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:46.926 [2024-11-26 07:41:14.770255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.926 [2024-11-26 07:41:14.770975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.926 [2024-11-26 07:41:14.771038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.926 [2024-11-26 07:41:14.771051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.926 [2024-11-26 07:41:14.771320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.926 [2024-11-26 07:41:14.771561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.926 [2024-11-26 07:41:14.771578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.926 [2024-11-26 07:41:14.771587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.926 [2024-11-26 07:41:14.771596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.926 [2024-11-26 07:41:14.784188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.926 [2024-11-26 07:41:14.784878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.926 [2024-11-26 07:41:14.784942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.927 [2024-11-26 07:41:14.784954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.927 [2024-11-26 07:41:14.785219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.927 [2024-11-26 07:41:14.785446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.927 [2024-11-26 07:41:14.785459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.927 [2024-11-26 07:41:14.785468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.927 [2024-11-26 07:41:14.785478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.927 [2024-11-26 07:41:14.798074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.927 [2024-11-26 07:41:14.798958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.927 [2024-11-26 07:41:14.799024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.927 [2024-11-26 07:41:14.799037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.927 [2024-11-26 07:41:14.799305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.927 [2024-11-26 07:41:14.799532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.927 [2024-11-26 07:41:14.799544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.927 [2024-11-26 07:41:14.799553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.927 [2024-11-26 07:41:14.799563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.927 [2024-11-26 07:41:14.811934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.927 [2024-11-26 07:41:14.812666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.927 [2024-11-26 07:41:14.812731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.927 [2024-11-26 07:41:14.812744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.927 [2024-11-26 07:41:14.812998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.927 [2024-11-26 07:41:14.813240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.927 [2024-11-26 07:41:14.813253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.927 [2024-11-26 07:41:14.813262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.927 [2024-11-26 07:41:14.813271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.927 [2024-11-26 07:41:14.825839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.927 [2024-11-26 07:41:14.826552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.927 [2024-11-26 07:41:14.826613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.927 [2024-11-26 07:41:14.826632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.927 [2024-11-26 07:41:14.826882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.927 [2024-11-26 07:41:14.827107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.927 [2024-11-26 07:41:14.827118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.927 [2024-11-26 07:41:14.827127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.927 [2024-11-26 07:41:14.827136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.927 [2024-11-26 07:41:14.839744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.927 [2024-11-26 07:41:14.840471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.927 [2024-11-26 07:41:14.840535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.927 [2024-11-26 07:41:14.840548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.927 [2024-11-26 07:41:14.840800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.927 [2024-11-26 07:41:14.841025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.927 [2024-11-26 07:41:14.841038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.927 [2024-11-26 07:41:14.841047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.927 [2024-11-26 07:41:14.841057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.927 [2024-11-26 07:41:14.853682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.927 [2024-11-26 07:41:14.854404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.927 [2024-11-26 07:41:14.854468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.927 [2024-11-26 07:41:14.854481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.927 [2024-11-26 07:41:14.854734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.927 [2024-11-26 07:41:14.854961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.927 [2024-11-26 07:41:14.854974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.927 [2024-11-26 07:41:14.854984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.927 [2024-11-26 07:41:14.854994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.927 [2024-11-26 07:41:14.867602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.927 [2024-11-26 07:41:14.868268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.927 [2024-11-26 07:41:14.868316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.927 [2024-11-26 07:41:14.868326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.927 [2024-11-26 07:41:14.868564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.927 [2024-11-26 07:41:14.868795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.927 [2024-11-26 07:41:14.868807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.927 [2024-11-26 07:41:14.868815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.927 [2024-11-26 07:41:14.868824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.927 [2024-11-26 07:41:14.881439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.927 [2024-11-26 07:41:14.882105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.927 [2024-11-26 07:41:14.882183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.927 [2024-11-26 07:41:14.882200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.927 [2024-11-26 07:41:14.882453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.927 [2024-11-26 07:41:14.882680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.928 [2024-11-26 07:41:14.882692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.928 [2024-11-26 07:41:14.882701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.928 [2024-11-26 07:41:14.882710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.928 [2024-11-26 07:41:14.895298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.928 [2024-11-26 07:41:14.895932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.928 [2024-11-26 07:41:14.895963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.928 [2024-11-26 07:41:14.895973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.928 [2024-11-26 07:41:14.896202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.928 [2024-11-26 07:41:14.896427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.928 [2024-11-26 07:41:14.896438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.928 [2024-11-26 07:41:14.896446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.928 [2024-11-26 07:41:14.896454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.928 [2024-11-26 07:41:14.907977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.928 [2024-11-26 07:41:14.908376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.928 [2024-11-26 07:41:14.908401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.928 [2024-11-26 07:41:14.908408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.928 [2024-11-26 07:41:14.908560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.928 [2024-11-26 07:41:14.908711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.928 [2024-11-26 07:41:14.908721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.928 [2024-11-26 07:41:14.908736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.928 [2024-11-26 07:41:14.908744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.928 [2024-11-26 07:41:14.920618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.928 [2024-11-26 07:41:14.921145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.928 [2024-11-26 07:41:14.921176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.928 [2024-11-26 07:41:14.921184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.928 [2024-11-26 07:41:14.921337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.928 [2024-11-26 07:41:14.921489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.928 [2024-11-26 07:41:14.921499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.928 [2024-11-26 07:41:14.921505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.928 [2024-11-26 07:41:14.921511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.928 [2024-11-26 07:41:14.933266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.928 [2024-11-26 07:41:14.933844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.928 [2024-11-26 07:41:14.933894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.928 [2024-11-26 07:41:14.933904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.928 [2024-11-26 07:41:14.934083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.928 [2024-11-26 07:41:14.934249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.928 [2024-11-26 07:41:14.934259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.928 [2024-11-26 07:41:14.934266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.928 [2024-11-26 07:41:14.934273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.928 [2024-11-26 07:41:14.945902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.928 [2024-11-26 07:41:14.946378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.928 [2024-11-26 07:41:14.946425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.928 [2024-11-26 07:41:14.946435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.928 [2024-11-26 07:41:14.946613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.928 [2024-11-26 07:41:14.946768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.928 [2024-11-26 07:41:14.946777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.928 [2024-11-26 07:41:14.946784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.928 [2024-11-26 07:41:14.946794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.928 [2024-11-26 07:41:14.958564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.928 [2024-11-26 07:41:14.959144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.928 [2024-11-26 07:41:14.959197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.928 [2024-11-26 07:41:14.959207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.928 [2024-11-26 07:41:14.959379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.928 [2024-11-26 07:41:14.959534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.928 [2024-11-26 07:41:14.959543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.928 [2024-11-26 07:41:14.959549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.928 [2024-11-26 07:41:14.959556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.928 [2024-11-26 07:41:14.971152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.928 [2024-11-26 07:41:14.971637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.928 [2024-11-26 07:41:14.971659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.928 [2024-11-26 07:41:14.971665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.928 [2024-11-26 07:41:14.971815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.928 [2024-11-26 07:41:14.971967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.928 [2024-11-26 07:41:14.971975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.928 [2024-11-26 07:41:14.971980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.928 [2024-11-26 07:41:14.971986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.928 [2024-11-26 07:41:14.983865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.928 [2024-11-26 07:41:14.984839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.928 [2024-11-26 07:41:14.984865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.928 [2024-11-26 07:41:14.984872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.928 [2024-11-26 07:41:14.985033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.928 [2024-11-26 07:41:14.985192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.928 [2024-11-26 07:41:14.985200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.928 [2024-11-26 07:41:14.985205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.928 [2024-11-26 07:41:14.985211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:46.928 [2024-11-26 07:41:14.996507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.929 [2024-11-26 07:41:14.997013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.929 [2024-11-26 07:41:14.997028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.929 [2024-11-26 07:41:14.997038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.929 [2024-11-26 07:41:14.997192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.929 [2024-11-26 07:41:14.997343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.929 [2024-11-26 07:41:14.997350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.929 [2024-11-26 07:41:14.997356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.929 [2024-11-26 07:41:14.997361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:46.929 [2024-11-26 07:41:15.009087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:46.929 [2024-11-26 07:41:15.009583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:46.929 [2024-11-26 07:41:15.009598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:46.929 [2024-11-26 07:41:15.009605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:46.929 [2024-11-26 07:41:15.009754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:46.929 [2024-11-26 07:41:15.009905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:46.929 [2024-11-26 07:41:15.009912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:46.929 [2024-11-26 07:41:15.009918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:46.929 [2024-11-26 07:41:15.009924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.192 [2024-11-26 07:41:15.021766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.192 [2024-11-26 07:41:15.022370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.192 [2024-11-26 07:41:15.022405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.192 [2024-11-26 07:41:15.022414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.192 [2024-11-26 07:41:15.022581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.192 [2024-11-26 07:41:15.022734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.192 [2024-11-26 07:41:15.022741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.192 [2024-11-26 07:41:15.022747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.192 [2024-11-26 07:41:15.022753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.192 [2024-11-26 07:41:15.034470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.192 [2024-11-26 07:41:15.035075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.192 [2024-11-26 07:41:15.035109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.192 [2024-11-26 07:41:15.035118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.192 [2024-11-26 07:41:15.035293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.192 [2024-11-26 07:41:15.035450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.192 [2024-11-26 07:41:15.035458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.192 [2024-11-26 07:41:15.035465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.192 [2024-11-26 07:41:15.035472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.192 [2024-11-26 07:41:15.047174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.192 [2024-11-26 07:41:15.047753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.192 [2024-11-26 07:41:15.047786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.192 [2024-11-26 07:41:15.047795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.192 [2024-11-26 07:41:15.047961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.192 [2024-11-26 07:41:15.048114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.192 [2024-11-26 07:41:15.048121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.192 [2024-11-26 07:41:15.048128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.192 [2024-11-26 07:41:15.048135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.192 [2024-11-26 07:41:15.059849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.192 [2024-11-26 07:41:15.060466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.192 [2024-11-26 07:41:15.060498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.192 [2024-11-26 07:41:15.060507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.192 [2024-11-26 07:41:15.060672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.192 [2024-11-26 07:41:15.060824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.192 [2024-11-26 07:41:15.060831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.192 [2024-11-26 07:41:15.060837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.192 [2024-11-26 07:41:15.060843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.192 [2024-11-26 07:41:15.072552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.192 [2024-11-26 07:41:15.073043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.192 [2024-11-26 07:41:15.073074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.192 [2024-11-26 07:41:15.073083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.192 [2024-11-26 07:41:15.073255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.192 [2024-11-26 07:41:15.073408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.192 [2024-11-26 07:41:15.073415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.192 [2024-11-26 07:41:15.073425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.192 [2024-11-26 07:41:15.073431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.192 [2024-11-26 07:41:15.085130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.192 [2024-11-26 07:41:15.085719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.192 [2024-11-26 07:41:15.085751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.192 [2024-11-26 07:41:15.085760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.192 [2024-11-26 07:41:15.085925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.192 [2024-11-26 07:41:15.086077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.192 [2024-11-26 07:41:15.086084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.192 [2024-11-26 07:41:15.086090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.192 [2024-11-26 07:41:15.086097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.192 [2024-11-26 07:41:15.097798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.192 [2024-11-26 07:41:15.098302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.192 [2024-11-26 07:41:15.098334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.193 [2024-11-26 07:41:15.098343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.193 [2024-11-26 07:41:15.098510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.193 [2024-11-26 07:41:15.098662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.193 [2024-11-26 07:41:15.098670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.193 [2024-11-26 07:41:15.098676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.193 [2024-11-26 07:41:15.098682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.193 [2024-11-26 07:41:15.110387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.193 [2024-11-26 07:41:15.110854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.193 [2024-11-26 07:41:15.110870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.193 [2024-11-26 07:41:15.110876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.193 [2024-11-26 07:41:15.111024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.193 [2024-11-26 07:41:15.111178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.193 [2024-11-26 07:41:15.111186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.193 [2024-11-26 07:41:15.111192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.193 [2024-11-26 07:41:15.111197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.193 [2024-11-26 07:41:15.123037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.193 [2024-11-26 07:41:15.123702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.193 [2024-11-26 07:41:15.123733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.193 [2024-11-26 07:41:15.123742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.193 [2024-11-26 07:41:15.123908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.193 [2024-11-26 07:41:15.124060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.193 [2024-11-26 07:41:15.124068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.193 [2024-11-26 07:41:15.124074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.193 [2024-11-26 07:41:15.124080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.193 [2024-11-26 07:41:15.135668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.193 [2024-11-26 07:41:15.136285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.193 [2024-11-26 07:41:15.136317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.193 [2024-11-26 07:41:15.136325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.193 [2024-11-26 07:41:15.136493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.193 [2024-11-26 07:41:15.136645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.193 [2024-11-26 07:41:15.136652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.193 [2024-11-26 07:41:15.136658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.193 [2024-11-26 07:41:15.136663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.193 [2024-11-26 07:41:15.148369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.193 [2024-11-26 07:41:15.148844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.193 [2024-11-26 07:41:15.148860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.193 [2024-11-26 07:41:15.148866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.193 [2024-11-26 07:41:15.149015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.193 [2024-11-26 07:41:15.149171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.193 [2024-11-26 07:41:15.149179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.193 [2024-11-26 07:41:15.149184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.193 [2024-11-26 07:41:15.149190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.193 [2024-11-26 07:41:15.161030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.193 [2024-11-26 07:41:15.161687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.193 [2024-11-26 07:41:15.161719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.193 [2024-11-26 07:41:15.161731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.193 [2024-11-26 07:41:15.161896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.193 [2024-11-26 07:41:15.162049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.193 [2024-11-26 07:41:15.162057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.193 [2024-11-26 07:41:15.162064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.193 [2024-11-26 07:41:15.162071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.193 [2024-11-26 07:41:15.173644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.193 [2024-11-26 07:41:15.174190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.193 [2024-11-26 07:41:15.174221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.193 [2024-11-26 07:41:15.174230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.193 [2024-11-26 07:41:15.174398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.193 [2024-11-26 07:41:15.174550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.193 [2024-11-26 07:41:15.174558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.193 [2024-11-26 07:41:15.174564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.193 [2024-11-26 07:41:15.174569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.193 [2024-11-26 07:41:15.186275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.193 [2024-11-26 07:41:15.186607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.193 [2024-11-26 07:41:15.186624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.193 [2024-11-26 07:41:15.186629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.193 [2024-11-26 07:41:15.186779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.193 [2024-11-26 07:41:15.186928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.193 [2024-11-26 07:41:15.186934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.193 [2024-11-26 07:41:15.186940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.193 [2024-11-26 07:41:15.186945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.193 [2024-11-26 07:41:15.198924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.193 [2024-11-26 07:41:15.199520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.193 [2024-11-26 07:41:15.199551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.193 [2024-11-26 07:41:15.199560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.193 [2024-11-26 07:41:15.199725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.193 [2024-11-26 07:41:15.199881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.193 [2024-11-26 07:41:15.199888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.193 [2024-11-26 07:41:15.199893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.193 [2024-11-26 07:41:15.199899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.193 [2024-11-26 07:41:15.211604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.193 [2024-11-26 07:41:15.212084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.193 [2024-11-26 07:41:15.212100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.193 [2024-11-26 07:41:15.212106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.193 [2024-11-26 07:41:15.212259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.193 [2024-11-26 07:41:15.212409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.193 [2024-11-26 07:41:15.212416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.193 [2024-11-26 07:41:15.212421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.193 [2024-11-26 07:41:15.212426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.193 [2024-11-26 07:41:15.224258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.193 [2024-11-26 07:41:15.224739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.194 [2024-11-26 07:41:15.224753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.194 [2024-11-26 07:41:15.224758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.194 [2024-11-26 07:41:15.224906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.194 [2024-11-26 07:41:15.225055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.194 [2024-11-26 07:41:15.225062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.194 [2024-11-26 07:41:15.225067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.194 [2024-11-26 07:41:15.225072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.194 [2024-11-26 07:41:15.236905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.194 [2024-11-26 07:41:15.237431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.194 [2024-11-26 07:41:15.237462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.194 [2024-11-26 07:41:15.237471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.194 [2024-11-26 07:41:15.237636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.194 [2024-11-26 07:41:15.237788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.194 [2024-11-26 07:41:15.237795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.194 [2024-11-26 07:41:15.237804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.194 [2024-11-26 07:41:15.237810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.194 [2024-11-26 07:41:15.249532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.194 [2024-11-26 07:41:15.250149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.194 [2024-11-26 07:41:15.250186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.194 [2024-11-26 07:41:15.250195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.194 [2024-11-26 07:41:15.250360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.194 [2024-11-26 07:41:15.250513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.194 [2024-11-26 07:41:15.250520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.194 [2024-11-26 07:41:15.250526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.194 [2024-11-26 07:41:15.250532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.194 [2024-11-26 07:41:15.262241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.194 [2024-11-26 07:41:15.262823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.194 [2024-11-26 07:41:15.262855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.194 [2024-11-26 07:41:15.262863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.194 [2024-11-26 07:41:15.263028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.194 [2024-11-26 07:41:15.263186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.194 [2024-11-26 07:41:15.263194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.194 [2024-11-26 07:41:15.263200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.194 [2024-11-26 07:41:15.263206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.194 7287.50 IOPS, 28.47 MiB/s [2024-11-26T06:41:15.292Z] [2024-11-26 07:41:15.274919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.194 [2024-11-26 07:41:15.275518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.194 [2024-11-26 07:41:15.275550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.194 [2024-11-26 07:41:15.275558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.194 [2024-11-26 07:41:15.275723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.194 [2024-11-26 07:41:15.275875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.194 [2024-11-26 07:41:15.275882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.194 [2024-11-26 07:41:15.275888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.194 [2024-11-26 07:41:15.275893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
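(The interleaved bdevperf sample above, 7287.50 IOPS at 28.47 MiB/s, is consistent with a 4 KiB I/O size: 7287.50 x 4096 B/s is about 29.85 MB/s, i.e. roughly 28.47 MiB/s. So the workload appears to keep completing 4 KiB requests on the surviving path while this second controller path, "[nqn.2016-06.io.spdk:cnode1, 2]", fails to reconnect.)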
00:31:47.457 [2024-11-26 07:41:15.287612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.457 [2024-11-26 07:41:15.288081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.457 [2024-11-26 07:41:15.288096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.457 [2024-11-26 07:41:15.288102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.457 [2024-11-26 07:41:15.288256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.457 [2024-11-26 07:41:15.288405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.457 [2024-11-26 07:41:15.288412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.457 [2024-11-26 07:41:15.288417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.457 [2024-11-26 07:41:15.288422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.457 [2024-11-26 07:41:15.300264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.457 [2024-11-26 07:41:15.300722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.457 [2024-11-26 07:41:15.300736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.457 [2024-11-26 07:41:15.300741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.458 [2024-11-26 07:41:15.300890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.458 [2024-11-26 07:41:15.301039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.458 [2024-11-26 07:41:15.301046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.458 [2024-11-26 07:41:15.301052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.458 [2024-11-26 07:41:15.301057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.458 [2024-11-26 07:41:15.312896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.458 [2024-11-26 07:41:15.313361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.458 [2024-11-26 07:41:15.313375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.458 [2024-11-26 07:41:15.313381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.458 [2024-11-26 07:41:15.313529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.458 [2024-11-26 07:41:15.313678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.458 [2024-11-26 07:41:15.313684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.458 [2024-11-26 07:41:15.313689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.458 [2024-11-26 07:41:15.313694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.458 [2024-11-26 07:41:15.325521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.458 [2024-11-26 07:41:15.326005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.458 [2024-11-26 07:41:15.326018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.458 [2024-11-26 07:41:15.326027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.458 [2024-11-26 07:41:15.326180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.458 [2024-11-26 07:41:15.326330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.458 [2024-11-26 07:41:15.326337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.458 [2024-11-26 07:41:15.326341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.458 [2024-11-26 07:41:15.326347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.458 [2024-11-26 07:41:15.338203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.458 [2024-11-26 07:41:15.338815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.458 [2024-11-26 07:41:15.338846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.458 [2024-11-26 07:41:15.338855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.458 [2024-11-26 07:41:15.339020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.458 [2024-11-26 07:41:15.339177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.458 [2024-11-26 07:41:15.339186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.458 [2024-11-26 07:41:15.339192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.458 [2024-11-26 07:41:15.339197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.458 [2024-11-26 07:41:15.350898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.458 [2024-11-26 07:41:15.351367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.458 [2024-11-26 07:41:15.351397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.458 [2024-11-26 07:41:15.351406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.458 [2024-11-26 07:41:15.351573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.458 [2024-11-26 07:41:15.351725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.458 [2024-11-26 07:41:15.351732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.458 [2024-11-26 07:41:15.351738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.458 [2024-11-26 07:41:15.351744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.458 [2024-11-26 07:41:15.363598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.458 [2024-11-26 07:41:15.364097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.458 [2024-11-26 07:41:15.364112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.458 [2024-11-26 07:41:15.364118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.458 [2024-11-26 07:41:15.364271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.458 [2024-11-26 07:41:15.364425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.458 [2024-11-26 07:41:15.364431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.458 [2024-11-26 07:41:15.364437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.458 [2024-11-26 07:41:15.364442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.458 [2024-11-26 07:41:15.376287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.458 [2024-11-26 07:41:15.376736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.458 [2024-11-26 07:41:15.376750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.458 [2024-11-26 07:41:15.376756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.458 [2024-11-26 07:41:15.376905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.458 [2024-11-26 07:41:15.377053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.458 [2024-11-26 07:41:15.377060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.458 [2024-11-26 07:41:15.377066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.458 [2024-11-26 07:41:15.377071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.458 [2024-11-26 07:41:15.388912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.458 [2024-11-26 07:41:15.389503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.458 [2024-11-26 07:41:15.389535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.458 [2024-11-26 07:41:15.389543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.458 [2024-11-26 07:41:15.389708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.458 [2024-11-26 07:41:15.389859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.458 [2024-11-26 07:41:15.389867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.458 [2024-11-26 07:41:15.389874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.458 [2024-11-26 07:41:15.389880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.458 [2024-11-26 07:41:15.401578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.458 [2024-11-26 07:41:15.402070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.458 [2024-11-26 07:41:15.402085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.458 [2024-11-26 07:41:15.402092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.458 [2024-11-26 07:41:15.402245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.458 [2024-11-26 07:41:15.402395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.458 [2024-11-26 07:41:15.402402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.458 [2024-11-26 07:41:15.402411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.458 [2024-11-26 07:41:15.402416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.458 [2024-11-26 07:41:15.414257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.458 [2024-11-26 07:41:15.414701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.458 [2024-11-26 07:41:15.414715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.458 [2024-11-26 07:41:15.414720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.458 [2024-11-26 07:41:15.414869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.458 [2024-11-26 07:41:15.415017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.458 [2024-11-26 07:41:15.415024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.458 [2024-11-26 07:41:15.415029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.458 [2024-11-26 07:41:15.415033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.459 [2024-11-26 07:41:15.426872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.459 [2024-11-26 07:41:15.427245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.459 [2024-11-26 07:41:15.427259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.459 [2024-11-26 07:41:15.427264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.459 [2024-11-26 07:41:15.427413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.459 [2024-11-26 07:41:15.427562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.459 [2024-11-26 07:41:15.427568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.459 [2024-11-26 07:41:15.427573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.459 [2024-11-26 07:41:15.427578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.459 [2024-11-26 07:41:15.439557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.459 [2024-11-26 07:41:15.439849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.459 [2024-11-26 07:41:15.439864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.459 [2024-11-26 07:41:15.439870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.459 [2024-11-26 07:41:15.440019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.459 [2024-11-26 07:41:15.440172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.459 [2024-11-26 07:41:15.440179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.459 [2024-11-26 07:41:15.440184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.459 [2024-11-26 07:41:15.440189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.459 [2024-11-26 07:41:15.452178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.459 [2024-11-26 07:41:15.452667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.459 [2024-11-26 07:41:15.452681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.459 [2024-11-26 07:41:15.452687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.459 [2024-11-26 07:41:15.452835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.459 [2024-11-26 07:41:15.452984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.459 [2024-11-26 07:41:15.452991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.459 [2024-11-26 07:41:15.452996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.459 [2024-11-26 07:41:15.453000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.459 [2024-11-26 07:41:15.464835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.459 [2024-11-26 07:41:15.465367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.459 [2024-11-26 07:41:15.465398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.459 [2024-11-26 07:41:15.465406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.459 [2024-11-26 07:41:15.465571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.459 [2024-11-26 07:41:15.465723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.459 [2024-11-26 07:41:15.465731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.459 [2024-11-26 07:41:15.465737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.459 [2024-11-26 07:41:15.465743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.459 [2024-11-26 07:41:15.477462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.459 [2024-11-26 07:41:15.477828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.459 [2024-11-26 07:41:15.477843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.459 [2024-11-26 07:41:15.477849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.459 [2024-11-26 07:41:15.477998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.459 [2024-11-26 07:41:15.478147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.459 [2024-11-26 07:41:15.478154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.459 [2024-11-26 07:41:15.478164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.459 [2024-11-26 07:41:15.478169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.459 [2024-11-26 07:41:15.490033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.459 [2024-11-26 07:41:15.490521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.459 [2024-11-26 07:41:15.490536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.459 [2024-11-26 07:41:15.490545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.459 [2024-11-26 07:41:15.490694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.459 [2024-11-26 07:41:15.490844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.459 [2024-11-26 07:41:15.490850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.459 [2024-11-26 07:41:15.490855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.459 [2024-11-26 07:41:15.490860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.459 [2024-11-26 07:41:15.502697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.459 [2024-11-26 07:41:15.503273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.459 [2024-11-26 07:41:15.503304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.459 [2024-11-26 07:41:15.503313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.459 [2024-11-26 07:41:15.503479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.459 [2024-11-26 07:41:15.503632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.459 [2024-11-26 07:41:15.503639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.459 [2024-11-26 07:41:15.503645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.459 [2024-11-26 07:41:15.503651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.459 [2024-11-26 07:41:15.515363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.459 [2024-11-26 07:41:15.515913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.459 [2024-11-26 07:41:15.515945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.459 [2024-11-26 07:41:15.515953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.459 [2024-11-26 07:41:15.516119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.459 [2024-11-26 07:41:15.516277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.459 [2024-11-26 07:41:15.516286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.459 [2024-11-26 07:41:15.516291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.459 [2024-11-26 07:41:15.516297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.459 [2024-11-26 07:41:15.527998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.459 [2024-11-26 07:41:15.528568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.459 [2024-11-26 07:41:15.528599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.459 [2024-11-26 07:41:15.528608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.459 [2024-11-26 07:41:15.528772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.459 [2024-11-26 07:41:15.528927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.459 [2024-11-26 07:41:15.528935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.459 [2024-11-26 07:41:15.528941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.459 [2024-11-26 07:41:15.528948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.459 [2024-11-26 07:41:15.540662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.459 [2024-11-26 07:41:15.541299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.459 [2024-11-26 07:41:15.541332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.459 [2024-11-26 07:41:15.541341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.459 [2024-11-26 07:41:15.541506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.459 [2024-11-26 07:41:15.541657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.459 [2024-11-26 07:41:15.541664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.459 [2024-11-26 07:41:15.541670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.460 [2024-11-26 07:41:15.541677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.722 [2024-11-26 07:41:15.553279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.722 [2024-11-26 07:41:15.553786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.722 [2024-11-26 07:41:15.553802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.722 [2024-11-26 07:41:15.553808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.722 [2024-11-26 07:41:15.553957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.722 [2024-11-26 07:41:15.554106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.722 [2024-11-26 07:41:15.554113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.722 [2024-11-26 07:41:15.554119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.722 [2024-11-26 07:41:15.554124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.722 [2024-11-26 07:41:15.565963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.722 [2024-11-26 07:41:15.566526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.722 [2024-11-26 07:41:15.566557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.722 [2024-11-26 07:41:15.566566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.722 [2024-11-26 07:41:15.566731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.722 [2024-11-26 07:41:15.566883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.722 [2024-11-26 07:41:15.566891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.722 [2024-11-26 07:41:15.566905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.722 [2024-11-26 07:41:15.566911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.722 [2024-11-26 07:41:15.578640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.722 [2024-11-26 07:41:15.579112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.722 [2024-11-26 07:41:15.579127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.722 [2024-11-26 07:41:15.579133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.722 [2024-11-26 07:41:15.579289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.722 [2024-11-26 07:41:15.579439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.723 [2024-11-26 07:41:15.579445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.723 [2024-11-26 07:41:15.579450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.723 [2024-11-26 07:41:15.579455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.723 [2024-11-26 07:41:15.591296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.723 [2024-11-26 07:41:15.591856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.723 [2024-11-26 07:41:15.591887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.723 [2024-11-26 07:41:15.591895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.723 [2024-11-26 07:41:15.592060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.723 [2024-11-26 07:41:15.592222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.723 [2024-11-26 07:41:15.592230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.723 [2024-11-26 07:41:15.592236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.723 [2024-11-26 07:41:15.592242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.723 [2024-11-26 07:41:15.603944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.723 [2024-11-26 07:41:15.604547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.723 [2024-11-26 07:41:15.604578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.723 [2024-11-26 07:41:15.604587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.723 [2024-11-26 07:41:15.604754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.723 [2024-11-26 07:41:15.604906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.723 [2024-11-26 07:41:15.604914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.723 [2024-11-26 07:41:15.604921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.723 [2024-11-26 07:41:15.604927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.723 [2024-11-26 07:41:15.616636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.723 [2024-11-26 07:41:15.617195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.723 [2024-11-26 07:41:15.617227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.723 [2024-11-26 07:41:15.617235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.723 [2024-11-26 07:41:15.617400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.723 [2024-11-26 07:41:15.617552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.723 [2024-11-26 07:41:15.617558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.723 [2024-11-26 07:41:15.617564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.723 [2024-11-26 07:41:15.617570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.723 [2024-11-26 07:41:15.629277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.723 [2024-11-26 07:41:15.629830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.723 [2024-11-26 07:41:15.629861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.723 [2024-11-26 07:41:15.629870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.723 [2024-11-26 07:41:15.630035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.723 [2024-11-26 07:41:15.630195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.723 [2024-11-26 07:41:15.630204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.723 [2024-11-26 07:41:15.630210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.723 [2024-11-26 07:41:15.630216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.723 [2024-11-26 07:41:15.641910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.723 [2024-11-26 07:41:15.642490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.723 [2024-11-26 07:41:15.642521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.723 [2024-11-26 07:41:15.642530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.723 [2024-11-26 07:41:15.642694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.723 [2024-11-26 07:41:15.642846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.723 [2024-11-26 07:41:15.642854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.723 [2024-11-26 07:41:15.642860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.723 [2024-11-26 07:41:15.642866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.723 [2024-11-26 07:41:15.654580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.723 [2024-11-26 07:41:15.654920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.723 [2024-11-26 07:41:15.654937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.723 [2024-11-26 07:41:15.654946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.723 [2024-11-26 07:41:15.655096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.723 [2024-11-26 07:41:15.655252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.723 [2024-11-26 07:41:15.655260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.723 [2024-11-26 07:41:15.655265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.723 [2024-11-26 07:41:15.655270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.723 [2024-11-26 07:41:15.667264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.723 [2024-11-26 07:41:15.667853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.723 [2024-11-26 07:41:15.667885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.723 [2024-11-26 07:41:15.667894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.723 [2024-11-26 07:41:15.668058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.723 [2024-11-26 07:41:15.668219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.723 [2024-11-26 07:41:15.668227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.723 [2024-11-26 07:41:15.668233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.723 [2024-11-26 07:41:15.668239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.723 [2024-11-26 07:41:15.679946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.723 [2024-11-26 07:41:15.680518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.723 [2024-11-26 07:41:15.680550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.723 [2024-11-26 07:41:15.680558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.723 [2024-11-26 07:41:15.680722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.723 [2024-11-26 07:41:15.680874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.723 [2024-11-26 07:41:15.680882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.723 [2024-11-26 07:41:15.680887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.723 [2024-11-26 07:41:15.680893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.723 [2024-11-26 07:41:15.692603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.723 [2024-11-26 07:41:15.693209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.723 [2024-11-26 07:41:15.693241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.723 [2024-11-26 07:41:15.693249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.723 [2024-11-26 07:41:15.693414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.723 [2024-11-26 07:41:15.693569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.723 [2024-11-26 07:41:15.693577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.723 [2024-11-26 07:41:15.693583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.723 [2024-11-26 07:41:15.693588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.723 [2024-11-26 07:41:15.705297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.723 [2024-11-26 07:41:15.705895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.723 [2024-11-26 07:41:15.705926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.723 [2024-11-26 07:41:15.705935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.723 [2024-11-26 07:41:15.706099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.723 [2024-11-26 07:41:15.706259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.724 [2024-11-26 07:41:15.706268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.724 [2024-11-26 07:41:15.706274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.724 [2024-11-26 07:41:15.706280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.724 [2024-11-26 07:41:15.717970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.724 [2024-11-26 07:41:15.718524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.724 [2024-11-26 07:41:15.718556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.724 [2024-11-26 07:41:15.718565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.724 [2024-11-26 07:41:15.718729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.724 [2024-11-26 07:41:15.718881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.724 [2024-11-26 07:41:15.718888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.724 [2024-11-26 07:41:15.718894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.724 [2024-11-26 07:41:15.718900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.724 [2024-11-26 07:41:15.730606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.724 [2024-11-26 07:41:15.731102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.724 [2024-11-26 07:41:15.731118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.724 [2024-11-26 07:41:15.731124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.724 [2024-11-26 07:41:15.731278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.724 [2024-11-26 07:41:15.731429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.724 [2024-11-26 07:41:15.731435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.724 [2024-11-26 07:41:15.731444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.724 [2024-11-26 07:41:15.731449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.724 [2024-11-26 07:41:15.743283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.724 [2024-11-26 07:41:15.743871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.724 [2024-11-26 07:41:15.743902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.724 [2024-11-26 07:41:15.743911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.724 [2024-11-26 07:41:15.744076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.724 [2024-11-26 07:41:15.744236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.724 [2024-11-26 07:41:15.744245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.724 [2024-11-26 07:41:15.744251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.724 [2024-11-26 07:41:15.744256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.724 [2024-11-26 07:41:15.755853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.724 [2024-11-26 07:41:15.756457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.724 [2024-11-26 07:41:15.756488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.724 [2024-11-26 07:41:15.756497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.724 [2024-11-26 07:41:15.756662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.724 [2024-11-26 07:41:15.756814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.724 [2024-11-26 07:41:15.756821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.724 [2024-11-26 07:41:15.756827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.724 [2024-11-26 07:41:15.756833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.724 [2024-11-26 07:41:15.768545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.724 [2024-11-26 07:41:15.769024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.724 [2024-11-26 07:41:15.769053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.724 [2024-11-26 07:41:15.769062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.724 [2024-11-26 07:41:15.769235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.724 [2024-11-26 07:41:15.769387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.724 [2024-11-26 07:41:15.769395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.724 [2024-11-26 07:41:15.769400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.724 [2024-11-26 07:41:15.769406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.724 [2024-11-26 07:41:15.781118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.724 [2024-11-26 07:41:15.781711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.724 [2024-11-26 07:41:15.781743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.724 [2024-11-26 07:41:15.781752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.724 [2024-11-26 07:41:15.781916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.724 [2024-11-26 07:41:15.782068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.724 [2024-11-26 07:41:15.782075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.724 [2024-11-26 07:41:15.782081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.724 [2024-11-26 07:41:15.782087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.724 [2024-11-26 07:41:15.793796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.724 [2024-11-26 07:41:15.794148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.724 [2024-11-26 07:41:15.794168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.724 [2024-11-26 07:41:15.794174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.724 [2024-11-26 07:41:15.794323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.724 [2024-11-26 07:41:15.794472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.724 [2024-11-26 07:41:15.794479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.724 [2024-11-26 07:41:15.794485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.724 [2024-11-26 07:41:15.794490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.724 [2024-11-26 07:41:15.806377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.724 [2024-11-26 07:41:15.806881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.724 [2024-11-26 07:41:15.806896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.724 [2024-11-26 07:41:15.806903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.724 [2024-11-26 07:41:15.807053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.724 [2024-11-26 07:41:15.807207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.724 [2024-11-26 07:41:15.807214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.724 [2024-11-26 07:41:15.807219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.724 [2024-11-26 07:41:15.807224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.988 [2024-11-26 07:41:15.819070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.988 [2024-11-26 07:41:15.819546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.988 [2024-11-26 07:41:15.819560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.988 [2024-11-26 07:41:15.819570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.988 [2024-11-26 07:41:15.819718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.988 [2024-11-26 07:41:15.819867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.988 [2024-11-26 07:41:15.819874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.988 [2024-11-26 07:41:15.819879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.988 [2024-11-26 07:41:15.819884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.988 [2024-11-26 07:41:15.831728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.988 [2024-11-26 07:41:15.832176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.988 [2024-11-26 07:41:15.832191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.988 [2024-11-26 07:41:15.832197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.988 [2024-11-26 07:41:15.832347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.989 [2024-11-26 07:41:15.832496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.989 [2024-11-26 07:41:15.832503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.989 [2024-11-26 07:41:15.832508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.989 [2024-11-26 07:41:15.832513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.989 [2024-11-26 07:41:15.844347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.989 [2024-11-26 07:41:15.844932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.989 [2024-11-26 07:41:15.844963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.989 [2024-11-26 07:41:15.844972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.989 [2024-11-26 07:41:15.845136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.989 [2024-11-26 07:41:15.845297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.989 [2024-11-26 07:41:15.845305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.989 [2024-11-26 07:41:15.845311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.989 [2024-11-26 07:41:15.845317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.989 [2024-11-26 07:41:15.857025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.989 [2024-11-26 07:41:15.857632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.989 [2024-11-26 07:41:15.857664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.989 [2024-11-26 07:41:15.857673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.989 [2024-11-26 07:41:15.857837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.989 [2024-11-26 07:41:15.857994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.989 [2024-11-26 07:41:15.858001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.989 [2024-11-26 07:41:15.858007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.989 [2024-11-26 07:41:15.858013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.989 [2024-11-26 07:41:15.869719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.989 [2024-11-26 07:41:15.870175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.989 [2024-11-26 07:41:15.870192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.989 [2024-11-26 07:41:15.870198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.989 [2024-11-26 07:41:15.870347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.989 [2024-11-26 07:41:15.870497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.989 [2024-11-26 07:41:15.870503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.989 [2024-11-26 07:41:15.870510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.989 [2024-11-26 07:41:15.870515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.989 [2024-11-26 07:41:15.882364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.989 [2024-11-26 07:41:15.882937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.989 [2024-11-26 07:41:15.882968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.989 [2024-11-26 07:41:15.882977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.989 [2024-11-26 07:41:15.883142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.989 [2024-11-26 07:41:15.883302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.989 [2024-11-26 07:41:15.883311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.989 [2024-11-26 07:41:15.883317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.989 [2024-11-26 07:41:15.883323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.989 [2024-11-26 07:41:15.895023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.989 [2024-11-26 07:41:15.895609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.989 [2024-11-26 07:41:15.895640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.989 [2024-11-26 07:41:15.895649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.989 [2024-11-26 07:41:15.895814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.989 [2024-11-26 07:41:15.895966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.989 [2024-11-26 07:41:15.895974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.989 [2024-11-26 07:41:15.895984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.989 [2024-11-26 07:41:15.895991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.989 [2024-11-26 07:41:15.907706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.989 [2024-11-26 07:41:15.908301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.989 [2024-11-26 07:41:15.908332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.989 [2024-11-26 07:41:15.908341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.989 [2024-11-26 07:41:15.908506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.989 [2024-11-26 07:41:15.908658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.989 [2024-11-26 07:41:15.908666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.989 [2024-11-26 07:41:15.908672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.989 [2024-11-26 07:41:15.908679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.989 [2024-11-26 07:41:15.920402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.989 [2024-11-26 07:41:15.920968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.989 [2024-11-26 07:41:15.921000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.989 [2024-11-26 07:41:15.921009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.989 [2024-11-26 07:41:15.921184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.989 [2024-11-26 07:41:15.921338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.989 [2024-11-26 07:41:15.921345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.989 [2024-11-26 07:41:15.921351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.989 [2024-11-26 07:41:15.921357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.989 [2024-11-26 07:41:15.933062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.989 [2024-11-26 07:41:15.933668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.989 [2024-11-26 07:41:15.933699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.989 [2024-11-26 07:41:15.933708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.989 [2024-11-26 07:41:15.933872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.989 [2024-11-26 07:41:15.934025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.989 [2024-11-26 07:41:15.934032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.989 [2024-11-26 07:41:15.934038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.989 [2024-11-26 07:41:15.934043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.989 [2024-11-26 07:41:15.945746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.989 [2024-11-26 07:41:15.946246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.989 [2024-11-26 07:41:15.946262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.989 [2024-11-26 07:41:15.946267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.989 [2024-11-26 07:41:15.946417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.989 [2024-11-26 07:41:15.946566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.989 [2024-11-26 07:41:15.946573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.989 [2024-11-26 07:41:15.946579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.989 [2024-11-26 07:41:15.946583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.990 [2024-11-26 07:41:15.958512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.990 [2024-11-26 07:41:15.959011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.990 [2024-11-26 07:41:15.959027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.990 [2024-11-26 07:41:15.959032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.990 [2024-11-26 07:41:15.959187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.990 [2024-11-26 07:41:15.959337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.990 [2024-11-26 07:41:15.959344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.990 [2024-11-26 07:41:15.959349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.990 [2024-11-26 07:41:15.959354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.990 [2024-11-26 07:41:15.971218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.990 [2024-11-26 07:41:15.971673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.990 [2024-11-26 07:41:15.971703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.990 [2024-11-26 07:41:15.971712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.990 [2024-11-26 07:41:15.971877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.990 [2024-11-26 07:41:15.972029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.990 [2024-11-26 07:41:15.972036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.990 [2024-11-26 07:41:15.972041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.990 [2024-11-26 07:41:15.972047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.990 [2024-11-26 07:41:15.983907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.990 [2024-11-26 07:41:15.984489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.990 [2024-11-26 07:41:15.984520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.990 [2024-11-26 07:41:15.984532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.990 [2024-11-26 07:41:15.984697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.990 [2024-11-26 07:41:15.984849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.990 [2024-11-26 07:41:15.984856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.990 [2024-11-26 07:41:15.984862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.990 [2024-11-26 07:41:15.984867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.990 [2024-11-26 07:41:15.996568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.990 [2024-11-26 07:41:15.997140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.990 [2024-11-26 07:41:15.997178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.990 [2024-11-26 07:41:15.997186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.990 [2024-11-26 07:41:15.997350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.990 [2024-11-26 07:41:15.997502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.990 [2024-11-26 07:41:15.997510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.990 [2024-11-26 07:41:15.997516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.990 [2024-11-26 07:41:15.997522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.990 [2024-11-26 07:41:16.009219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.990 [2024-11-26 07:41:16.009817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.990 [2024-11-26 07:41:16.009848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.990 [2024-11-26 07:41:16.009857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.990 [2024-11-26 07:41:16.010022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.990 [2024-11-26 07:41:16.010183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.990 [2024-11-26 07:41:16.010192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.990 [2024-11-26 07:41:16.010198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.990 [2024-11-26 07:41:16.010204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.990 [2024-11-26 07:41:16.021905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.990 [2024-11-26 07:41:16.022470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.990 [2024-11-26 07:41:16.022501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.990 [2024-11-26 07:41:16.022510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.990 [2024-11-26 07:41:16.022675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.990 [2024-11-26 07:41:16.022831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.990 [2024-11-26 07:41:16.022838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.990 [2024-11-26 07:41:16.022844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.990 [2024-11-26 07:41:16.022850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.990 [2024-11-26 07:41:16.034548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.990 [2024-11-26 07:41:16.035130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.990 [2024-11-26 07:41:16.035168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.990 [2024-11-26 07:41:16.035176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.990 [2024-11-26 07:41:16.035341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.990 [2024-11-26 07:41:16.035492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.990 [2024-11-26 07:41:16.035500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.990 [2024-11-26 07:41:16.035506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.990 [2024-11-26 07:41:16.035511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.990 [2024-11-26 07:41:16.047209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.990 [2024-11-26 07:41:16.047790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.990 [2024-11-26 07:41:16.047822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.990 [2024-11-26 07:41:16.047831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.990 [2024-11-26 07:41:16.047995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.990 [2024-11-26 07:41:16.048147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.990 [2024-11-26 07:41:16.048155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.990 [2024-11-26 07:41:16.048170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.990 [2024-11-26 07:41:16.048176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:47.990 [2024-11-26 07:41:16.059871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.990 [2024-11-26 07:41:16.060465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.990 [2024-11-26 07:41:16.060496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.990 [2024-11-26 07:41:16.060504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.990 [2024-11-26 07:41:16.060669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.990 [2024-11-26 07:41:16.060821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.990 [2024-11-26 07:41:16.060829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.990 [2024-11-26 07:41:16.060839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.990 [2024-11-26 07:41:16.060845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:47.990 [2024-11-26 07:41:16.072555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:47.990 [2024-11-26 07:41:16.073127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:47.990 [2024-11-26 07:41:16.073166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:47.990 [2024-11-26 07:41:16.073174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:47.990 [2024-11-26 07:41:16.073339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:47.990 [2024-11-26 07:41:16.073491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:47.990 [2024-11-26 07:41:16.073499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:47.990 [2024-11-26 07:41:16.073505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:47.990 [2024-11-26 07:41:16.073512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.253 [2024-11-26 07:41:16.085235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.253 [2024-11-26 07:41:16.085794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.253 [2024-11-26 07:41:16.085826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.253 [2024-11-26 07:41:16.085834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.253 [2024-11-26 07:41:16.085999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.253 [2024-11-26 07:41:16.086151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.253 [2024-11-26 07:41:16.086167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.253 [2024-11-26 07:41:16.086174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.253 [2024-11-26 07:41:16.086179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.253 [2024-11-26 07:41:16.097878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.253 [2024-11-26 07:41:16.098448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.253 [2024-11-26 07:41:16.098479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.253 [2024-11-26 07:41:16.098489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.253 [2024-11-26 07:41:16.098656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.253 [2024-11-26 07:41:16.098808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.253 [2024-11-26 07:41:16.098815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.253 [2024-11-26 07:41:16.098821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.253 [2024-11-26 07:41:16.098827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.253 [2024-11-26 07:41:16.110531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.253 [2024-11-26 07:41:16.111119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.253 [2024-11-26 07:41:16.111150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.253 [2024-11-26 07:41:16.111166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.253 [2024-11-26 07:41:16.111332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.253 [2024-11-26 07:41:16.111484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.253 [2024-11-26 07:41:16.111491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.253 [2024-11-26 07:41:16.111497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.253 [2024-11-26 07:41:16.111503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.253 [2024-11-26 07:41:16.123223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.253 [2024-11-26 07:41:16.123721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.253 [2024-11-26 07:41:16.123737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.253 [2024-11-26 07:41:16.123742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.253 [2024-11-26 07:41:16.123891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.253 [2024-11-26 07:41:16.124041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.254 [2024-11-26 07:41:16.124047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.254 [2024-11-26 07:41:16.124053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.254 [2024-11-26 07:41:16.124058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.254 [2024-11-26 07:41:16.135897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.254 [2024-11-26 07:41:16.136437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.254 [2024-11-26 07:41:16.136468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.254 [2024-11-26 07:41:16.136476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.254 [2024-11-26 07:41:16.136641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.254 [2024-11-26 07:41:16.136793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.254 [2024-11-26 07:41:16.136800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.254 [2024-11-26 07:41:16.136806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.254 [2024-11-26 07:41:16.136812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.254 [2024-11-26 07:41:16.148522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.254 [2024-11-26 07:41:16.149123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.254 [2024-11-26 07:41:16.149155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.254 [2024-11-26 07:41:16.149175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.254 [2024-11-26 07:41:16.149339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.254 [2024-11-26 07:41:16.149492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.254 [2024-11-26 07:41:16.149499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.254 [2024-11-26 07:41:16.149505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.254 [2024-11-26 07:41:16.149511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.254 [2024-11-26 07:41:16.161225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.254 [2024-11-26 07:41:16.161687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.254 [2024-11-26 07:41:16.161703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.254 [2024-11-26 07:41:16.161709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.254 [2024-11-26 07:41:16.161858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.254 [2024-11-26 07:41:16.162008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.254 [2024-11-26 07:41:16.162014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.254 [2024-11-26 07:41:16.162020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.254 [2024-11-26 07:41:16.162025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.254 [2024-11-26 07:41:16.173907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.254 [2024-11-26 07:41:16.174370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.254 [2024-11-26 07:41:16.174386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.254 [2024-11-26 07:41:16.174392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.254 [2024-11-26 07:41:16.174542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.254 [2024-11-26 07:41:16.174692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.254 [2024-11-26 07:41:16.174699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.254 [2024-11-26 07:41:16.174704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.254 [2024-11-26 07:41:16.174709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.254 [2024-11-26 07:41:16.186566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.254 [2024-11-26 07:41:16.187053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.254 [2024-11-26 07:41:16.187067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.254 [2024-11-26 07:41:16.187072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.254 [2024-11-26 07:41:16.187226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.254 [2024-11-26 07:41:16.187383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.254 [2024-11-26 07:41:16.187390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.254 [2024-11-26 07:41:16.187396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.254 [2024-11-26 07:41:16.187400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.254 [2024-11-26 07:41:16.199252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.254 [2024-11-26 07:41:16.199835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.254 [2024-11-26 07:41:16.199867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.254 [2024-11-26 07:41:16.199876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.254 [2024-11-26 07:41:16.200041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.254 [2024-11-26 07:41:16.200201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.254 [2024-11-26 07:41:16.200209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.254 [2024-11-26 07:41:16.200215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.254 [2024-11-26 07:41:16.200221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.254 [2024-11-26 07:41:16.211924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.254 [2024-11-26 07:41:16.212525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.254 [2024-11-26 07:41:16.212557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.254 [2024-11-26 07:41:16.212565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.254 [2024-11-26 07:41:16.212730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.254 [2024-11-26 07:41:16.212882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.254 [2024-11-26 07:41:16.212889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.254 [2024-11-26 07:41:16.212895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.254 [2024-11-26 07:41:16.212901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.254 [2024-11-26 07:41:16.224621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.254 [2024-11-26 07:41:16.225216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.254 [2024-11-26 07:41:16.225248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.254 [2024-11-26 07:41:16.225257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.254 [2024-11-26 07:41:16.225421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.254 [2024-11-26 07:41:16.225574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.254 [2024-11-26 07:41:16.225581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.254 [2024-11-26 07:41:16.225590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.254 [2024-11-26 07:41:16.225596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.254 [2024-11-26 07:41:16.237308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.254 [2024-11-26 07:41:16.237850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.254 [2024-11-26 07:41:16.237881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.254 [2024-11-26 07:41:16.237890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.254 [2024-11-26 07:41:16.238055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.254 [2024-11-26 07:41:16.238214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.254 [2024-11-26 07:41:16.238222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.254 [2024-11-26 07:41:16.238228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.254 [2024-11-26 07:41:16.238234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.254 [2024-11-26 07:41:16.249963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.254 [2024-11-26 07:41:16.250518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.254 [2024-11-26 07:41:16.250549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.254 [2024-11-26 07:41:16.250558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.254 [2024-11-26 07:41:16.250722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.254 [2024-11-26 07:41:16.250874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.255 [2024-11-26 07:41:16.250882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.255 [2024-11-26 07:41:16.250889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.255 [2024-11-26 07:41:16.250895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.255 [2024-11-26 07:41:16.262605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.255 [2024-11-26 07:41:16.263190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.255 [2024-11-26 07:41:16.263222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.255 [2024-11-26 07:41:16.263231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.255 [2024-11-26 07:41:16.263395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.255 [2024-11-26 07:41:16.263547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.255 [2024-11-26 07:41:16.263555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.255 [2024-11-26 07:41:16.263561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.255 [2024-11-26 07:41:16.263567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.255 5830.00 IOPS, 22.77 MiB/s [2024-11-26T06:41:16.353Z] [2024-11-26 07:41:16.275293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.255 [2024-11-26 07:41:16.275874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.255 [2024-11-26 07:41:16.275905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.255 [2024-11-26 07:41:16.275914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.255 [2024-11-26 07:41:16.276079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.255 [2024-11-26 07:41:16.276240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.255 [2024-11-26 07:41:16.276248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.255 [2024-11-26 07:41:16.276254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.255 [2024-11-26 07:41:16.276260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.255 [2024-11-26 07:41:16.287964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.255 [2024-11-26 07:41:16.288517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.255 [2024-11-26 07:41:16.288549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.255 [2024-11-26 07:41:16.288557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.255 [2024-11-26 07:41:16.288722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.255 [2024-11-26 07:41:16.288874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.255 [2024-11-26 07:41:16.288881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.255 [2024-11-26 07:41:16.288887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.255 [2024-11-26 07:41:16.288893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.255 [2024-11-26 07:41:16.300599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.255 [2024-11-26 07:41:16.301174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.255 [2024-11-26 07:41:16.301205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.255 [2024-11-26 07:41:16.301214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.255 [2024-11-26 07:41:16.301378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.255 [2024-11-26 07:41:16.301530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.255 [2024-11-26 07:41:16.301538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.255 [2024-11-26 07:41:16.301543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.255 [2024-11-26 07:41:16.301549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.255 [2024-11-26 07:41:16.313261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.255 [2024-11-26 07:41:16.313854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.255 [2024-11-26 07:41:16.313888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.255 [2024-11-26 07:41:16.313897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.255 [2024-11-26 07:41:16.314062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.255 [2024-11-26 07:41:16.314222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.255 [2024-11-26 07:41:16.314230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.255 [2024-11-26 07:41:16.314236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.255 [2024-11-26 07:41:16.314242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.255 [2024-11-26 07:41:16.325941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.255 [2024-11-26 07:41:16.326500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.255 [2024-11-26 07:41:16.326531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.255 [2024-11-26 07:41:16.326540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.255 [2024-11-26 07:41:16.326704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.255 [2024-11-26 07:41:16.326856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.255 [2024-11-26 07:41:16.326864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.255 [2024-11-26 07:41:16.326869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.255 [2024-11-26 07:41:16.326875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.255 [2024-11-26 07:41:16.338574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.255 [2024-11-26 07:41:16.339123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.255 [2024-11-26 07:41:16.339154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.255 [2024-11-26 07:41:16.339170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.255 [2024-11-26 07:41:16.339335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.255 [2024-11-26 07:41:16.339487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.255 [2024-11-26 07:41:16.339494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.255 [2024-11-26 07:41:16.339500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.255 [2024-11-26 07:41:16.339506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.517 [2024-11-26 07:41:16.351216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.517 [2024-11-26 07:41:16.351812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.517 [2024-11-26 07:41:16.351844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.517 [2024-11-26 07:41:16.351853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.517 [2024-11-26 07:41:16.352017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.517 [2024-11-26 07:41:16.352183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.517 [2024-11-26 07:41:16.352191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.517 [2024-11-26 07:41:16.352197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.517 [2024-11-26 07:41:16.352203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:48.517 [2024-11-26 07:41:16.363905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:48.517 [2024-11-26 07:41:16.364387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:48.517 [2024-11-26 07:41:16.364418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:48.517 [2024-11-26 07:41:16.364427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:48.517 [2024-11-26 07:41:16.364591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:48.517 [2024-11-26 07:41:16.364743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:48.517 [2024-11-26 07:41:16.364751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:48.517 [2024-11-26 07:41:16.364757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:48.517 [2024-11-26 07:41:16.364762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:48.517 [2024-11-26 07:41:16.376614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.517 [2024-11-26 07:41:16.377126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.517 [2024-11-26 07:41:16.377141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.517 [2024-11-26 07:41:16.377147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.517 [2024-11-26 07:41:16.377322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.517 [2024-11-26 07:41:16.377474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.517 [2024-11-26 07:41:16.377480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.517 [2024-11-26 07:41:16.377486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.517 [2024-11-26 07:41:16.377491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.517 [2024-11-26 07:41:16.389185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.517 [2024-11-26 07:41:16.389753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.517 [2024-11-26 07:41:16.389784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.517 [2024-11-26 07:41:16.389793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.517 [2024-11-26 07:41:16.389958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.517 [2024-11-26 07:41:16.390109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.517 [2024-11-26 07:41:16.390117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.517 [2024-11-26 07:41:16.390126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.517 [2024-11-26 07:41:16.390132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.517 [2024-11-26 07:41:16.401835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.517 [2024-11-26 07:41:16.402389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.517 [2024-11-26 07:41:16.402420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.517 [2024-11-26 07:41:16.402429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.517 [2024-11-26 07:41:16.402593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.518 [2024-11-26 07:41:16.402745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.518 [2024-11-26 07:41:16.402753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.518 [2024-11-26 07:41:16.402758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.518 [2024-11-26 07:41:16.402764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.518 [2024-11-26 07:41:16.414482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.518 [2024-11-26 07:41:16.415081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.518 [2024-11-26 07:41:16.415113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.518 [2024-11-26 07:41:16.415121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.518 [2024-11-26 07:41:16.415295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.518 [2024-11-26 07:41:16.415448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.518 [2024-11-26 07:41:16.415455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.518 [2024-11-26 07:41:16.415461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.518 [2024-11-26 07:41:16.415467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.518 [2024-11-26 07:41:16.427178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.518 [2024-11-26 07:41:16.427677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.518 [2024-11-26 07:41:16.427693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.518 [2024-11-26 07:41:16.427699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.518 [2024-11-26 07:41:16.427847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.518 [2024-11-26 07:41:16.427997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.518 [2024-11-26 07:41:16.428004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.518 [2024-11-26 07:41:16.428010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.518 [2024-11-26 07:41:16.428015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.518 [2024-11-26 07:41:16.439868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.518 [2024-11-26 07:41:16.440367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.518 [2024-11-26 07:41:16.440382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.518 [2024-11-26 07:41:16.440387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.518 [2024-11-26 07:41:16.440536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.518 [2024-11-26 07:41:16.440685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.518 [2024-11-26 07:41:16.440692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.518 [2024-11-26 07:41:16.440698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.518 [2024-11-26 07:41:16.440703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.518 [2024-11-26 07:41:16.452447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.518 [2024-11-26 07:41:16.453038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.518 [2024-11-26 07:41:16.453069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.518 [2024-11-26 07:41:16.453078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.518 [2024-11-26 07:41:16.453252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.518 [2024-11-26 07:41:16.453405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.518 [2024-11-26 07:41:16.453412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.518 [2024-11-26 07:41:16.453418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.518 [2024-11-26 07:41:16.453424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.518 [2024-11-26 07:41:16.465115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.518 [2024-11-26 07:41:16.465642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.518 [2024-11-26 07:41:16.465674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.518 [2024-11-26 07:41:16.465682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.518 [2024-11-26 07:41:16.465847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.518 [2024-11-26 07:41:16.465999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.518 [2024-11-26 07:41:16.466006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.518 [2024-11-26 07:41:16.466012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.518 [2024-11-26 07:41:16.466018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.518 [2024-11-26 07:41:16.477728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.518 [2024-11-26 07:41:16.478198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.518 [2024-11-26 07:41:16.478220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.518 [2024-11-26 07:41:16.478231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.518 [2024-11-26 07:41:16.478386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.518 [2024-11-26 07:41:16.478536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.518 [2024-11-26 07:41:16.478543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.518 [2024-11-26 07:41:16.478548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.518 [2024-11-26 07:41:16.478554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.518 [2024-11-26 07:41:16.490392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.518 [2024-11-26 07:41:16.490981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.518 [2024-11-26 07:41:16.491012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.518 [2024-11-26 07:41:16.491021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.518 [2024-11-26 07:41:16.491194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.518 [2024-11-26 07:41:16.491346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.518 [2024-11-26 07:41:16.491355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.518 [2024-11-26 07:41:16.491361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.518 [2024-11-26 07:41:16.491367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.518 [2024-11-26 07:41:16.503063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.518 [2024-11-26 07:41:16.503560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.518 [2024-11-26 07:41:16.503576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.518 [2024-11-26 07:41:16.503582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.518 [2024-11-26 07:41:16.503730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.518 [2024-11-26 07:41:16.503880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.518 [2024-11-26 07:41:16.503886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.518 [2024-11-26 07:41:16.503892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.518 [2024-11-26 07:41:16.503897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.518 [2024-11-26 07:41:16.515761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.518 [2024-11-26 07:41:16.516357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.518 [2024-11-26 07:41:16.516388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.518 [2024-11-26 07:41:16.516397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.518 [2024-11-26 07:41:16.516562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.518 [2024-11-26 07:41:16.516718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.518 [2024-11-26 07:41:16.516726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.518 [2024-11-26 07:41:16.516731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.518 [2024-11-26 07:41:16.516737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.518 [2024-11-26 07:41:16.528440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.518 [2024-11-26 07:41:16.528987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.518 [2024-11-26 07:41:16.529019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.518 [2024-11-26 07:41:16.529028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.518 [2024-11-26 07:41:16.529201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.519 [2024-11-26 07:41:16.529354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.519 [2024-11-26 07:41:16.529361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.519 [2024-11-26 07:41:16.529367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.519 [2024-11-26 07:41:16.529374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.519 [2024-11-26 07:41:16.541067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.519 [2024-11-26 07:41:16.541634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.519 [2024-11-26 07:41:16.541665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.519 [2024-11-26 07:41:16.541674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.519 [2024-11-26 07:41:16.541839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.519 [2024-11-26 07:41:16.541991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.519 [2024-11-26 07:41:16.541998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.519 [2024-11-26 07:41:16.542004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.519 [2024-11-26 07:41:16.542010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.519 [2024-11-26 07:41:16.553732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.519 [2024-11-26 07:41:16.554269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.519 [2024-11-26 07:41:16.554300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.519 [2024-11-26 07:41:16.554309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.519 [2024-11-26 07:41:16.554475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.519 [2024-11-26 07:41:16.554627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.519 [2024-11-26 07:41:16.554635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.519 [2024-11-26 07:41:16.554645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.519 [2024-11-26 07:41:16.554651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.519 [2024-11-26 07:41:16.566354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.519 [2024-11-26 07:41:16.566852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.519 [2024-11-26 07:41:16.566867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.519 [2024-11-26 07:41:16.566873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.519 [2024-11-26 07:41:16.567022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.519 [2024-11-26 07:41:16.567176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.519 [2024-11-26 07:41:16.567184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.519 [2024-11-26 07:41:16.567189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.519 [2024-11-26 07:41:16.567195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.519 [2024-11-26 07:41:16.579040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.519 [2024-11-26 07:41:16.579649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.519 [2024-11-26 07:41:16.579680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.519 [2024-11-26 07:41:16.579689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.519 [2024-11-26 07:41:16.579853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.519 [2024-11-26 07:41:16.580005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.519 [2024-11-26 07:41:16.580013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.519 [2024-11-26 07:41:16.580019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.519 [2024-11-26 07:41:16.580025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.519 [2024-11-26 07:41:16.591750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.519 [2024-11-26 07:41:16.592257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.519 [2024-11-26 07:41:16.592273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.519 [2024-11-26 07:41:16.592280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.519 [2024-11-26 07:41:16.592431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.519 [2024-11-26 07:41:16.592581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.519 [2024-11-26 07:41:16.592587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.519 [2024-11-26 07:41:16.592592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.519 [2024-11-26 07:41:16.592597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.519 [2024-11-26 07:41:16.604453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.519 [2024-11-26 07:41:16.604781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.519 [2024-11-26 07:41:16.604795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.519 [2024-11-26 07:41:16.604800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.519 [2024-11-26 07:41:16.604949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.519 [2024-11-26 07:41:16.605098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.519 [2024-11-26 07:41:16.605105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.519 [2024-11-26 07:41:16.605111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.519 [2024-11-26 07:41:16.605116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.782 [2024-11-26 07:41:16.617113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.782 [2024-11-26 07:41:16.617710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.782 [2024-11-26 07:41:16.617742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.782 [2024-11-26 07:41:16.617750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.782 [2024-11-26 07:41:16.617915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.782 [2024-11-26 07:41:16.618067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.782 [2024-11-26 07:41:16.618074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.782 [2024-11-26 07:41:16.618081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.782 [2024-11-26 07:41:16.618087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.782 [2024-11-26 07:41:16.629811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.782 [2024-11-26 07:41:16.630433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.783 [2024-11-26 07:41:16.630464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.783 [2024-11-26 07:41:16.630474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.783 [2024-11-26 07:41:16.630638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.783 [2024-11-26 07:41:16.630790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.783 [2024-11-26 07:41:16.630797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.783 [2024-11-26 07:41:16.630803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.783 [2024-11-26 07:41:16.630809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.783 [2024-11-26 07:41:16.642509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.783 [2024-11-26 07:41:16.643106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.783 [2024-11-26 07:41:16.643138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.783 [2024-11-26 07:41:16.643150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.783 [2024-11-26 07:41:16.643322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.783 [2024-11-26 07:41:16.643475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.783 [2024-11-26 07:41:16.643482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.783 [2024-11-26 07:41:16.643488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.783 [2024-11-26 07:41:16.643494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.783 [2024-11-26 07:41:16.655205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.783 [2024-11-26 07:41:16.655783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.783 [2024-11-26 07:41:16.655815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.783 [2024-11-26 07:41:16.655823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.783 [2024-11-26 07:41:16.655988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.783 [2024-11-26 07:41:16.656140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.783 [2024-11-26 07:41:16.656147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.783 [2024-11-26 07:41:16.656153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.783 [2024-11-26 07:41:16.656166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.783 [2024-11-26 07:41:16.667877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.783 [2024-11-26 07:41:16.668358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.783 [2024-11-26 07:41:16.668374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.783 [2024-11-26 07:41:16.668380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.783 [2024-11-26 07:41:16.668529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.783 [2024-11-26 07:41:16.668678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.783 [2024-11-26 07:41:16.668685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.783 [2024-11-26 07:41:16.668690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.783 [2024-11-26 07:41:16.668696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.783 [2024-11-26 07:41:16.680559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.783 [2024-11-26 07:41:16.681014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.783 [2024-11-26 07:41:16.681028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.783 [2024-11-26 07:41:16.681034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.783 [2024-11-26 07:41:16.681189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.783 [2024-11-26 07:41:16.681342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.783 [2024-11-26 07:41:16.681350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.783 [2024-11-26 07:41:16.681356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.783 [2024-11-26 07:41:16.681361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.783 [2024-11-26 07:41:16.693133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.783 [2024-11-26 07:41:16.693624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.783 [2024-11-26 07:41:16.693638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.783 [2024-11-26 07:41:16.693643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.783 [2024-11-26 07:41:16.693792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.783 [2024-11-26 07:41:16.693940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.783 [2024-11-26 07:41:16.693947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.783 [2024-11-26 07:41:16.693952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.783 [2024-11-26 07:41:16.693957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.783 [2024-11-26 07:41:16.705794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.783 [2024-11-26 07:41:16.706379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.783 [2024-11-26 07:41:16.706411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.783 [2024-11-26 07:41:16.706420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.783 [2024-11-26 07:41:16.706586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.783 [2024-11-26 07:41:16.706738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.783 [2024-11-26 07:41:16.706745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.783 [2024-11-26 07:41:16.706751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.783 [2024-11-26 07:41:16.706757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.783 [2024-11-26 07:41:16.718462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.783 [2024-11-26 07:41:16.719039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.784 [2024-11-26 07:41:16.719070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.784 [2024-11-26 07:41:16.719079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.784 [2024-11-26 07:41:16.719253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.784 [2024-11-26 07:41:16.719406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.784 [2024-11-26 07:41:16.719413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.784 [2024-11-26 07:41:16.719422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.784 [2024-11-26 07:41:16.719428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.784 [2024-11-26 07:41:16.731129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.784 [2024-11-26 07:41:16.731733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.784 [2024-11-26 07:41:16.731765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.784 [2024-11-26 07:41:16.731773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.784 [2024-11-26 07:41:16.731938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.784 [2024-11-26 07:41:16.732089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.784 [2024-11-26 07:41:16.732097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.784 [2024-11-26 07:41:16.732102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.784 [2024-11-26 07:41:16.732108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.784 [2024-11-26 07:41:16.743828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.784 [2024-11-26 07:41:16.744492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.784 [2024-11-26 07:41:16.744523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.784 [2024-11-26 07:41:16.744532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.784 [2024-11-26 07:41:16.744699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.784 [2024-11-26 07:41:16.744851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.784 [2024-11-26 07:41:16.744858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.784 [2024-11-26 07:41:16.744865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.784 [2024-11-26 07:41:16.744871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.784 [2024-11-26 07:41:16.756449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.784 [2024-11-26 07:41:16.757042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.784 [2024-11-26 07:41:16.757073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.784 [2024-11-26 07:41:16.757082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.784 [2024-11-26 07:41:16.757255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.784 [2024-11-26 07:41:16.757409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.784 [2024-11-26 07:41:16.757416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.784 [2024-11-26 07:41:16.757422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.784 [2024-11-26 07:41:16.757427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.784 [2024-11-26 07:41:16.769126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.784 [2024-11-26 07:41:16.769699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.784 [2024-11-26 07:41:16.769731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.784 [2024-11-26 07:41:16.769739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.784 [2024-11-26 07:41:16.769904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.784 [2024-11-26 07:41:16.770056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.784 [2024-11-26 07:41:16.770063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.784 [2024-11-26 07:41:16.770069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.784 [2024-11-26 07:41:16.770075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.784 [2024-11-26 07:41:16.781779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.784 [2024-11-26 07:41:16.782235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.784 [2024-11-26 07:41:16.782251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.784 [2024-11-26 07:41:16.782257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.784 [2024-11-26 07:41:16.782406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.784 [2024-11-26 07:41:16.782556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.784 [2024-11-26 07:41:16.782562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.784 [2024-11-26 07:41:16.782568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.784 [2024-11-26 07:41:16.782573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.784 [2024-11-26 07:41:16.794432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.784 [2024-11-26 07:41:16.795025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.784 [2024-11-26 07:41:16.795056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.784 [2024-11-26 07:41:16.795065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.784 [2024-11-26 07:41:16.795236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.784 [2024-11-26 07:41:16.795388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.784 [2024-11-26 07:41:16.795396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.784 [2024-11-26 07:41:16.795402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.784 [2024-11-26 07:41:16.795408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.784 [2024-11-26 07:41:16.807118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.785 [2024-11-26 07:41:16.807477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.785 [2024-11-26 07:41:16.807494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.785 [2024-11-26 07:41:16.807507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.785 [2024-11-26 07:41:16.807657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.785 [2024-11-26 07:41:16.807807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.785 [2024-11-26 07:41:16.807813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.785 [2024-11-26 07:41:16.807818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.785 [2024-11-26 07:41:16.807823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.785 [2024-11-26 07:41:16.819827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.785 [2024-11-26 07:41:16.820283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.785 [2024-11-26 07:41:16.820299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.785 [2024-11-26 07:41:16.820305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.785 [2024-11-26 07:41:16.820454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.785 [2024-11-26 07:41:16.820602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.785 [2024-11-26 07:41:16.820609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.785 [2024-11-26 07:41:16.820614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.785 [2024-11-26 07:41:16.820619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.785 [2024-11-26 07:41:16.832464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.785 [2024-11-26 07:41:16.832913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.785 [2024-11-26 07:41:16.832927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.785 [2024-11-26 07:41:16.832933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.785 [2024-11-26 07:41:16.833081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.785 [2024-11-26 07:41:16.833234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.785 [2024-11-26 07:41:16.833241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.785 [2024-11-26 07:41:16.833246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.785 [2024-11-26 07:41:16.833253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.785 [2024-11-26 07:41:16.845083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.785 [2024-11-26 07:41:16.845642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.785 [2024-11-26 07:41:16.845673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.785 [2024-11-26 07:41:16.845682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.785 [2024-11-26 07:41:16.845848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.785 [2024-11-26 07:41:16.846004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.785 [2024-11-26 07:41:16.846012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.785 [2024-11-26 07:41:16.846018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.785 [2024-11-26 07:41:16.846024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.785 [2024-11-26 07:41:16.857738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.785 [2024-11-26 07:41:16.858198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.785 [2024-11-26 07:41:16.858215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.785 [2024-11-26 07:41:16.858220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.785 [2024-11-26 07:41:16.858369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.785 [2024-11-26 07:41:16.858518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.785 [2024-11-26 07:41:16.858525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.785 [2024-11-26 07:41:16.858531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.785 [2024-11-26 07:41:16.858537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:48.785 [2024-11-26 07:41:16.870376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:48.785 [2024-11-26 07:41:16.870863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:48.785 [2024-11-26 07:41:16.870876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:48.785 [2024-11-26 07:41:16.870882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:48.785 [2024-11-26 07:41:16.871030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:48.785 [2024-11-26 07:41:16.871183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:48.785 [2024-11-26 07:41:16.871190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:48.785 [2024-11-26 07:41:16.871195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:48.785 [2024-11-26 07:41:16.871200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.049 [2024-11-26 07:41:16.883037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.049 [2024-11-26 07:41:16.883493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.049 [2024-11-26 07:41:16.883507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.049 [2024-11-26 07:41:16.883513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.049 [2024-11-26 07:41:16.883662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.049 [2024-11-26 07:41:16.883811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.049 [2024-11-26 07:41:16.883818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.049 [2024-11-26 07:41:16.883826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.049 [2024-11-26 07:41:16.883831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1641912 Killed "${NVMF_APP[@]}" "$@"
00:31:49.049 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:31:49.049 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:31:49.049 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:49.049 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:49.049 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:49.050 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1643502
00:31:49.050 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1643502
00:31:49.050 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:49.050 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1643502 ']'
00:31:49.050 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:49.050 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:49.050 [2024-11-26 07:41:16.895670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.050 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:49.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
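Here the old target (pid 1641912) has been killed and tgt_init has launched a replacement nvmf_tgt inside the cvl_0_0_ns_spdk namespace; waitforlisten now polls until the new process (nvmfpid 1643502) is accepting connections on the RPC socket /var/tmp/spdk.sock. The real helper is a bash function in autotest_common.sh; a hedged C sketch of what that wait amounts to (the retry count echoes the max_retries=100 above, the delay is illustrative):

    /* Sketch of a "waitforlisten"-style poll: try to connect to the
     * target's UNIX-domain RPC socket until it accepts, or give up. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int wait_for_rpc_sock(const char *path, int max_retries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);              /* target is up and listening */
                return 0;
            }
            close(fd);
            usleep(100 * 1000);         /* 100 ms between attempts (illustrative) */
        }
        return -1;                      /* target never came up */
    }

    int main(void)
    {
        if (wait_for_rpc_sock("/var/tmp/spdk.sock", 100) == 0)
            puts("nvmf_tgt is listening");
        else
            puts("timed out waiting for /var/tmp/spdk.sock");
        return 0;
    }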
00:31:49.050 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:49.050 [2024-11-26 07:41:16.896145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.050 [2024-11-26 07:41:16.896164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.050 [2024-11-26 07:41:16.896170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.050 07:41:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:49.050 [2024-11-26 07:41:16.896319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.050 [2024-11-26 07:41:16.896470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.050 [2024-11-26 07:41:16.896477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.050 [2024-11-26 07:41:16.896483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.050 [2024-11-26 07:41:16.896488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.050 [2024-11-26 07:41:16.908326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.050 [2024-11-26 07:41:16.908817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.050 [2024-11-26 07:41:16.908831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.050 [2024-11-26 07:41:16.908836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.050 [2024-11-26 07:41:16.908985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.050 [2024-11-26 07:41:16.909134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.050 [2024-11-26 07:41:16.909145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.050 [2024-11-26 07:41:16.909151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.050 [2024-11-26 07:41:16.909155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.050 [2024-11-26 07:41:16.920995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.050 [2024-11-26 07:41:16.921487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.050 [2024-11-26 07:41:16.921519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.050 [2024-11-26 07:41:16.921528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.050 [2024-11-26 07:41:16.921692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.050 [2024-11-26 07:41:16.921844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.050 [2024-11-26 07:41:16.921852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.050 [2024-11-26 07:41:16.921858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.050 [2024-11-26 07:41:16.921864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.050 [2024-11-26 07:41:16.933586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.050 [2024-11-26 07:41:16.934228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.050 [2024-11-26 07:41:16.934259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.050 [2024-11-26 07:41:16.934268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.050 [2024-11-26 07:41:16.934435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.050 [2024-11-26 07:41:16.934588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.050 [2024-11-26 07:41:16.934595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.050 [2024-11-26 07:41:16.934601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.050 [2024-11-26 07:41:16.934607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.050 [2024-11-26 07:41:16.946168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.050 [2024-11-26 07:41:16.946454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.050 [2024-11-26 07:41:16.946470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.050 [2024-11-26 07:41:16.946476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.050 [2024-11-26 07:41:16.946625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.050 [2024-11-26 07:41:16.946774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.050 [2024-11-26 07:41:16.946781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.050 [2024-11-26 07:41:16.946787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.050 [2024-11-26 07:41:16.946796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.050 [2024-11-26 07:41:16.948072] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:31:49.050 [2024-11-26 07:41:16.948123] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:49.050 [2024-11-26 07:41:16.958790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.050 [2024-11-26 07:41:16.959179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.050 [2024-11-26 07:41:16.959195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.050 [2024-11-26 07:41:16.959201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.050 [2024-11-26 07:41:16.959351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.050 [2024-11-26 07:41:16.959501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.050 [2024-11-26 07:41:16.959508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.050 [2024-11-26 07:41:16.959513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.050 [2024-11-26 07:41:16.959519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.050 [2024-11-26 07:41:16.971363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.050 [2024-11-26 07:41:16.971823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.050 [2024-11-26 07:41:16.971837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.050 [2024-11-26 07:41:16.971844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.050 [2024-11-26 07:41:16.971992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.050 [2024-11-26 07:41:16.972141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.050 [2024-11-26 07:41:16.972148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.050 [2024-11-26 07:41:16.972154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.050 [2024-11-26 07:41:16.972164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.050 [2024-11-26 07:41:16.984010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.050 [2024-11-26 07:41:16.984371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.050 [2024-11-26 07:41:16.984384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.050 [2024-11-26 07:41:16.984390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.050 [2024-11-26 07:41:16.984538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.050 [2024-11-26 07:41:16.984688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.050 [2024-11-26 07:41:16.984694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.050 [2024-11-26 07:41:16.984702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.050 [2024-11-26 07:41:16.984709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.050 [2024-11-26 07:41:16.996638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.050 [2024-11-26 07:41:16.997095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.050 [2024-11-26 07:41:16.997110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.051 [2024-11-26 07:41:16.997115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.051 [2024-11-26 07:41:16.997269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.051 [2024-11-26 07:41:16.997419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.051 [2024-11-26 07:41:16.997426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.051 [2024-11-26 07:41:16.997432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.051 [2024-11-26 07:41:16.997438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.051 [2024-11-26 07:41:17.009307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.051 [2024-11-26 07:41:17.009753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.051 [2024-11-26 07:41:17.009767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.051 [2024-11-26 07:41:17.009774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.051 [2024-11-26 07:41:17.009923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.051 [2024-11-26 07:41:17.010072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.051 [2024-11-26 07:41:17.010079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.051 [2024-11-26 07:41:17.010084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.051 [2024-11-26 07:41:17.010089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.051 [2024-11-26 07:41:17.021971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.051 [2024-11-26 07:41:17.022513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.051 [2024-11-26 07:41:17.022545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.051 [2024-11-26 07:41:17.022554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.051 [2024-11-26 07:41:17.022719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.051 [2024-11-26 07:41:17.022871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.051 [2024-11-26 07:41:17.022879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.051 [2024-11-26 07:41:17.022885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.051 [2024-11-26 07:41:17.022891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.051 [2024-11-26 07:41:17.034609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.051 [2024-11-26 07:41:17.035077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.051 [2024-11-26 07:41:17.035094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.051 [2024-11-26 07:41:17.035100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.051 [2024-11-26 07:41:17.035253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.051 [2024-11-26 07:41:17.035403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.051 [2024-11-26 07:41:17.035410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.051 [2024-11-26 07:41:17.035416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.051 [2024-11-26 07:41:17.035422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.051 [2024-11-26 07:41:17.040795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:49.051 [2024-11-26 07:41:17.047261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.051 [2024-11-26 07:41:17.047638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.051 [2024-11-26 07:41:17.047652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.051 [2024-11-26 07:41:17.047657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.051 [2024-11-26 07:41:17.047806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.051 [2024-11-26 07:41:17.047956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.051 [2024-11-26 07:41:17.047963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.051 [2024-11-26 07:41:17.047969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.051 [2024-11-26 07:41:17.047974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.051 [2024-11-26 07:41:17.059828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.051 [2024-11-26 07:41:17.060228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.051 [2024-11-26 07:41:17.060242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.051 [2024-11-26 07:41:17.060248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.051 [2024-11-26 07:41:17.060396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.051 [2024-11-26 07:41:17.060545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.051 [2024-11-26 07:41:17.060552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.051 [2024-11-26 07:41:17.060557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.051 [2024-11-26 07:41:17.060563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.051 [2024-11-26 07:41:17.069992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:49.051 [2024-11-26 07:41:17.070016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:49.051 [2024-11-26 07:41:17.070022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:49.051 [2024-11-26 07:41:17.070031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:49.051 [2024-11-26 07:41:17.070035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:49.051 [2024-11-26 07:41:17.071193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:49.051 [2024-11-26 07:41:17.071290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:49.051 [2024-11-26 07:41:17.071399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:49.051 [2024-11-26 07:41:17.072404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.051 [2024-11-26 07:41:17.072880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.051 [2024-11-26 07:41:17.072894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.051 [2024-11-26 07:41:17.072899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.051 [2024-11-26 07:41:17.073048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.051 [2024-11-26 07:41:17.073203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.051 [2024-11-26 07:41:17.073210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.051 [2024-11-26 07:41:17.073215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.051 [2024-11-26 07:41:17.073220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.051 [2024-11-26 07:41:17.085085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.051 [2024-11-26 07:41:17.085671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.051 [2024-11-26 07:41:17.085707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.051 [2024-11-26 07:41:17.085716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.051 [2024-11-26 07:41:17.085887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.051 [2024-11-26 07:41:17.086040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.051 [2024-11-26 07:41:17.086048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.051 [2024-11-26 07:41:17.086054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.051 [2024-11-26 07:41:17.086061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.051 [2024-11-26 07:41:17.097781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.051 [2024-11-26 07:41:17.098367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.051 [2024-11-26 07:41:17.098400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.051 [2024-11-26 07:41:17.098409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.051 [2024-11-26 07:41:17.098579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.051 [2024-11-26 07:41:17.098731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.051 [2024-11-26 07:41:17.098739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.051 [2024-11-26 07:41:17.098751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.051 [2024-11-26 07:41:17.098758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.051 [2024-11-26 07:41:17.110469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.051 [2024-11-26 07:41:17.110820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.051 [2024-11-26 07:41:17.110836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.052 [2024-11-26 07:41:17.110842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.052 [2024-11-26 07:41:17.110992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.052 [2024-11-26 07:41:17.111141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.052 [2024-11-26 07:41:17.111148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.052 [2024-11-26 07:41:17.111153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.052 [2024-11-26 07:41:17.111165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.052 [2024-11-26 07:41:17.123149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.052 [2024-11-26 07:41:17.123768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.052 [2024-11-26 07:41:17.123801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.052 [2024-11-26 07:41:17.123810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.052 [2024-11-26 07:41:17.123977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.052 [2024-11-26 07:41:17.124130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.052 [2024-11-26 07:41:17.124138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.052 [2024-11-26 07:41:17.124144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.052 [2024-11-26 07:41:17.124150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.052 [2024-11-26 07:41:17.135726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.052 [2024-11-26 07:41:17.136268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.052 [2024-11-26 07:41:17.136301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.052 [2024-11-26 07:41:17.136309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.052 [2024-11-26 07:41:17.136475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.052 [2024-11-26 07:41:17.136627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.052 [2024-11-26 07:41:17.136634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.052 [2024-11-26 07:41:17.136640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.052 [2024-11-26 07:41:17.136646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.316 [2024-11-26 07:41:17.148359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.316 [2024-11-26 07:41:17.148842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.316 [2024-11-26 07:41:17.148858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.316 [2024-11-26 07:41:17.148864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.316 [2024-11-26 07:41:17.149013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.316 [2024-11-26 07:41:17.149167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.316 [2024-11-26 07:41:17.149175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.316 [2024-11-26 07:41:17.149180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.316 [2024-11-26 07:41:17.149186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.316 [2024-11-26 07:41:17.161045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.316 [2024-11-26 07:41:17.161631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.316 [2024-11-26 07:41:17.161663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.316 [2024-11-26 07:41:17.161672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.316 [2024-11-26 07:41:17.161837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.316 [2024-11-26 07:41:17.161989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.316 [2024-11-26 07:41:17.161996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.316 [2024-11-26 07:41:17.162002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.316 [2024-11-26 07:41:17.162008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.316 [2024-11-26 07:41:17.173717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.316 [2024-11-26 07:41:17.174228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.316 [2024-11-26 07:41:17.174245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.316 [2024-11-26 07:41:17.174251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.316 [2024-11-26 07:41:17.174400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.316 [2024-11-26 07:41:17.174550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.316 [2024-11-26 07:41:17.174556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.316 [2024-11-26 07:41:17.174561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.316 [2024-11-26 07:41:17.174567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.316 [2024-11-26 07:41:17.186286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.316 [2024-11-26 07:41:17.186782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.316 [2024-11-26 07:41:17.186797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.316 [2024-11-26 07:41:17.186808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.316 [2024-11-26 07:41:17.186958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.316 [2024-11-26 07:41:17.187107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.316 [2024-11-26 07:41:17.187113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.316 [2024-11-26 07:41:17.187119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.316 [2024-11-26 07:41:17.187124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.316 [2024-11-26 07:41:17.198964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.316 [2024-11-26 07:41:17.199519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.316 [2024-11-26 07:41:17.199551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.316 [2024-11-26 07:41:17.199560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.316 [2024-11-26 07:41:17.199725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.316 [2024-11-26 07:41:17.199877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.316 [2024-11-26 07:41:17.199885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.316 [2024-11-26 07:41:17.199890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.316 [2024-11-26 07:41:17.199896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.316 [2024-11-26 07:41:17.211649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.316 [2024-11-26 07:41:17.212168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.316 [2024-11-26 07:41:17.212183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.316 [2024-11-26 07:41:17.212189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.316 [2024-11-26 07:41:17.212338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.316 [2024-11-26 07:41:17.212488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.316 [2024-11-26 07:41:17.212495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.316 [2024-11-26 07:41:17.212501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.316 [2024-11-26 07:41:17.212506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.316 [2024-11-26 07:41:17.224345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.316 [2024-11-26 07:41:17.224798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.316 [2024-11-26 07:41:17.224812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.316 [2024-11-26 07:41:17.224817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.316 [2024-11-26 07:41:17.224966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.316 [2024-11-26 07:41:17.225119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.316 [2024-11-26 07:41:17.225126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.316 [2024-11-26 07:41:17.225131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.316 [2024-11-26 07:41:17.225136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.316 [2024-11-26 07:41:17.236980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.316 [2024-11-26 07:41:17.237502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.316 [2024-11-26 07:41:17.237534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.316 [2024-11-26 07:41:17.237543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.316 [2024-11-26 07:41:17.237709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.316 [2024-11-26 07:41:17.237861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.317 [2024-11-26 07:41:17.237868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.317 [2024-11-26 07:41:17.237875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.317 [2024-11-26 07:41:17.237881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.317 [2024-11-26 07:41:17.249592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.317 [2024-11-26 07:41:17.250122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.317 [2024-11-26 07:41:17.250138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.317 [2024-11-26 07:41:17.250144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.317 [2024-11-26 07:41:17.250300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.317 [2024-11-26 07:41:17.250451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.317 [2024-11-26 07:41:17.250458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.317 [2024-11-26 07:41:17.250463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.317 [2024-11-26 07:41:17.250468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.317 [2024-11-26 07:41:17.262180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.317 [2024-11-26 07:41:17.262765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.317 [2024-11-26 07:41:17.262797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.317 [2024-11-26 07:41:17.262806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.317 [2024-11-26 07:41:17.262971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.317 [2024-11-26 07:41:17.263123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.317 [2024-11-26 07:41:17.263131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.317 [2024-11-26 07:41:17.263141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.317 [2024-11-26 07:41:17.263147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.317 4858.33 IOPS, 18.98 MiB/s [2024-11-26T06:41:17.415Z]
[2024-11-26 07:41:17.274924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.317 [2024-11-26 07:41:17.275508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.317 [2024-11-26 07:41:17.275540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.317 [2024-11-26 07:41:17.275549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.317 [2024-11-26 07:41:17.275713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.317 [2024-11-26 07:41:17.275865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.317 [2024-11-26 07:41:17.275873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.317 [2024-11-26 07:41:17.275879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.317 [2024-11-26 07:41:17.275885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.317 [2024-11-26 07:41:17.287604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.317 [2024-11-26 07:41:17.288185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.317 [2024-11-26 07:41:17.288216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.317 [2024-11-26 07:41:17.288224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.317 [2024-11-26 07:41:17.288392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.317 [2024-11-26 07:41:17.288543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.317 [2024-11-26 07:41:17.288551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.317 [2024-11-26 07:41:17.288557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.317 [2024-11-26 07:41:17.288564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.317 [2024-11-26 07:41:17.300278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.317 [2024-11-26 07:41:17.300743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.317 [2024-11-26 07:41:17.300759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.317 [2024-11-26 07:41:17.300765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.317 [2024-11-26 07:41:17.300914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.317 [2024-11-26 07:41:17.301063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.317 [2024-11-26 07:41:17.301070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.317 [2024-11-26 07:41:17.301075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.317 [2024-11-26 07:41:17.301080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.317 [2024-11-26 07:41:17.312930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.317 [2024-11-26 07:41:17.313472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.317 [2024-11-26 07:41:17.313504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.317 [2024-11-26 07:41:17.313513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.317 [2024-11-26 07:41:17.313678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.317 [2024-11-26 07:41:17.313831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.317 [2024-11-26 07:41:17.313838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.317 [2024-11-26 07:41:17.313844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.317 [2024-11-26 07:41:17.313850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.317 [2024-11-26 07:41:17.325564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.317 [2024-11-26 07:41:17.326129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.317 [2024-11-26 07:41:17.326166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.317 [2024-11-26 07:41:17.326176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.317 [2024-11-26 07:41:17.326343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.317 [2024-11-26 07:41:17.326495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.317 [2024-11-26 07:41:17.326503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.317 [2024-11-26 07:41:17.326509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.317 [2024-11-26 07:41:17.326514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.317 [2024-11-26 07:41:17.338211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.317 [2024-11-26 07:41:17.338818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.317 [2024-11-26 07:41:17.338849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.317 [2024-11-26 07:41:17.338858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.317 [2024-11-26 07:41:17.339022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.317 [2024-11-26 07:41:17.339181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.317 [2024-11-26 07:41:17.339190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.317 [2024-11-26 07:41:17.339196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.317 [2024-11-26 07:41:17.339201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.317 [2024-11-26 07:41:17.350896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.317 [2024-11-26 07:41:17.351512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.317 [2024-11-26 07:41:17.351544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.317 [2024-11-26 07:41:17.351556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.317 [2024-11-26 07:41:17.351721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.317 [2024-11-26 07:41:17.351873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.317 [2024-11-26 07:41:17.351880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.317 [2024-11-26 07:41:17.351886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.317 [2024-11-26 07:41:17.351892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.317 [2024-11-26 07:41:17.363469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.317 [2024-11-26 07:41:17.364033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.317 [2024-11-26 07:41:17.364064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.317 [2024-11-26 07:41:17.364073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.318 [2024-11-26 07:41:17.364245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.318 [2024-11-26 07:41:17.364398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.318 [2024-11-26 07:41:17.364405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.318 [2024-11-26 07:41:17.364411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.318 [2024-11-26 07:41:17.364416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.318 [2024-11-26 07:41:17.376119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.318 [2024-11-26 07:41:17.376678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.318 [2024-11-26 07:41:17.376710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.318 [2024-11-26 07:41:17.376719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.318 [2024-11-26 07:41:17.376884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.318 [2024-11-26 07:41:17.377036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.318 [2024-11-26 07:41:17.377043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.318 [2024-11-26 07:41:17.377050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.318 [2024-11-26 07:41:17.377056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.318 [2024-11-26 07:41:17.388757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.318 [2024-11-26 07:41:17.389126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.318 [2024-11-26 07:41:17.389142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.318 [2024-11-26 07:41:17.389148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.318 [2024-11-26 07:41:17.389301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.318 [2024-11-26 07:41:17.389455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.318 [2024-11-26 07:41:17.389462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.318 [2024-11-26 07:41:17.389468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.318 [2024-11-26 07:41:17.389473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.318 [2024-11-26 07:41:17.401444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.318 [2024-11-26 07:41:17.401898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.318 [2024-11-26 07:41:17.401911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.318 [2024-11-26 07:41:17.401916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.318 [2024-11-26 07:41:17.402065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.318 [2024-11-26 07:41:17.402219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.318 [2024-11-26 07:41:17.402226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.318 [2024-11-26 07:41:17.402231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.318 [2024-11-26 07:41:17.402236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.580 [2024-11-26 07:41:17.414067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.580 [2024-11-26 07:41:17.414491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.580 [2024-11-26 07:41:17.414504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.580 [2024-11-26 07:41:17.414510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.580 [2024-11-26 07:41:17.414659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.580 [2024-11-26 07:41:17.414808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.580 [2024-11-26 07:41:17.414814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.580 [2024-11-26 07:41:17.414820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.580 [2024-11-26 07:41:17.414824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.580 [2024-11-26 07:41:17.426679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.580 [2024-11-26 07:41:17.427180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.580 [2024-11-26 07:41:17.427195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.580 [2024-11-26 07:41:17.427201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.580 [2024-11-26 07:41:17.427349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.580 [2024-11-26 07:41:17.427498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.580 [2024-11-26 07:41:17.427505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.580 [2024-11-26 07:41:17.427517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.580 [2024-11-26 07:41:17.427522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.580 [2024-11-26 07:41:17.439354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.580 [2024-11-26 07:41:17.439803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.580 [2024-11-26 07:41:17.439816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.580 [2024-11-26 07:41:17.439821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.580 [2024-11-26 07:41:17.439969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.580 [2024-11-26 07:41:17.440118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.580 [2024-11-26 07:41:17.440125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.580 [2024-11-26 07:41:17.440131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.580 [2024-11-26 07:41:17.440135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.580 [2024-11-26 07:41:17.451971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.580 [2024-11-26 07:41:17.452527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.580 [2024-11-26 07:41:17.452558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.580 [2024-11-26 07:41:17.452567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.580 [2024-11-26 07:41:17.452732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.580 [2024-11-26 07:41:17.452883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.580 [2024-11-26 07:41:17.452891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.580 [2024-11-26 07:41:17.452897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.580 [2024-11-26 07:41:17.452903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.580 [2024-11-26 07:41:17.464611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.580 [2024-11-26 07:41:17.465181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.580 [2024-11-26 07:41:17.465212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.580 [2024-11-26 07:41:17.465221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.580 [2024-11-26 07:41:17.465387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.580 [2024-11-26 07:41:17.465538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.580 [2024-11-26 07:41:17.465545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.580 [2024-11-26 07:41:17.465550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.580 [2024-11-26 07:41:17.465557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.580 [2024-11-26 07:41:17.477262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.581 [2024-11-26 07:41:17.477795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.581 [2024-11-26 07:41:17.477827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.581 [2024-11-26 07:41:17.477836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.581 [2024-11-26 07:41:17.478000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.581 [2024-11-26 07:41:17.478152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.581 [2024-11-26 07:41:17.478167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.581 [2024-11-26 07:41:17.478176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.581 [2024-11-26 07:41:17.478184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.581 [2024-11-26 07:41:17.489889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.581 [2024-11-26 07:41:17.490499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.581 [2024-11-26 07:41:17.490531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.581 [2024-11-26 07:41:17.490540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.581 [2024-11-26 07:41:17.490704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.581 [2024-11-26 07:41:17.490857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.581 [2024-11-26 07:41:17.490864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.581 [2024-11-26 07:41:17.490870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.581 [2024-11-26 07:41:17.490876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.581 [2024-11-26 07:41:17.502574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.581 [2024-11-26 07:41:17.503084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.581 [2024-11-26 07:41:17.503099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.581 [2024-11-26 07:41:17.503105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.581 [2024-11-26 07:41:17.503258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.581 [2024-11-26 07:41:17.503409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.581 [2024-11-26 07:41:17.503415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.581 [2024-11-26 07:41:17.503421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.581 [2024-11-26 07:41:17.503426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.581 [2024-11-26 07:41:17.515256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.581 [2024-11-26 07:41:17.515751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.581 [2024-11-26 07:41:17.515765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.581 [2024-11-26 07:41:17.515774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.581 [2024-11-26 07:41:17.515923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.581 [2024-11-26 07:41:17.516072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.581 [2024-11-26 07:41:17.516079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.581 [2024-11-26 07:41:17.516084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.581 [2024-11-26 07:41:17.516089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.581 [2024-11-26 07:41:17.527918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.581 [2024-11-26 07:41:17.528465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.581 [2024-11-26 07:41:17.528497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.581 [2024-11-26 07:41:17.528506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.581 [2024-11-26 07:41:17.528670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.581 [2024-11-26 07:41:17.528822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.581 [2024-11-26 07:41:17.528830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.581 [2024-11-26 07:41:17.528836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.581 [2024-11-26 07:41:17.528842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.581 [2024-11-26 07:41:17.540538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.581 [2024-11-26 07:41:17.541034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.581 [2024-11-26 07:41:17.541066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.581 [2024-11-26 07:41:17.541075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.581 [2024-11-26 07:41:17.541247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.581 [2024-11-26 07:41:17.541400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.581 [2024-11-26 07:41:17.541407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.581 [2024-11-26 07:41:17.541413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.581 [2024-11-26 07:41:17.541418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.581 [2024-11-26 07:41:17.553115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.581 [2024-11-26 07:41:17.553591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.581 [2024-11-26 07:41:17.553607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.581 [2024-11-26 07:41:17.553612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.581 [2024-11-26 07:41:17.553761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.581 [2024-11-26 07:41:17.553914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.581 [2024-11-26 07:41:17.553922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.581 [2024-11-26 07:41:17.553927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.581 [2024-11-26 07:41:17.553932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.581 [2024-11-26 07:41:17.565809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.581 [2024-11-26 07:41:17.566468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.581 [2024-11-26 07:41:17.566500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.581 [2024-11-26 07:41:17.566509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.581 [2024-11-26 07:41:17.566674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.581 [2024-11-26 07:41:17.566825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.581 [2024-11-26 07:41:17.566833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.581 [2024-11-26 07:41:17.566839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.581 [2024-11-26 07:41:17.566845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.581 [2024-11-26 07:41:17.578416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.581 [2024-11-26 07:41:17.579007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.581 [2024-11-26 07:41:17.579038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.581 [2024-11-26 07:41:17.579047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.581 [2024-11-26 07:41:17.579219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.581 [2024-11-26 07:41:17.579372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.581 [2024-11-26 07:41:17.579380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.581 [2024-11-26 07:41:17.579386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.581 [2024-11-26 07:41:17.579392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.581 [2024-11-26 07:41:17.591092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.581 [2024-11-26 07:41:17.591675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.581 [2024-11-26 07:41:17.591707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.581 [2024-11-26 07:41:17.591716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.581 [2024-11-26 07:41:17.591881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.581 [2024-11-26 07:41:17.592032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.581 [2024-11-26 07:41:17.592040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.581 [2024-11-26 07:41:17.592049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.581 [2024-11-26 07:41:17.592055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.582 [2024-11-26 07:41:17.603761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.582 [2024-11-26 07:41:17.604302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.582 [2024-11-26 07:41:17.604334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.582 [2024-11-26 07:41:17.604343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.582 [2024-11-26 07:41:17.604510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.582 [2024-11-26 07:41:17.604662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.582 [2024-11-26 07:41:17.604669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.582 [2024-11-26 07:41:17.604675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.582 [2024-11-26 07:41:17.604680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.582 [2024-11-26 07:41:17.616381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.582 [2024-11-26 07:41:17.616830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.582 [2024-11-26 07:41:17.616862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.582 [2024-11-26 07:41:17.616871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.582 [2024-11-26 07:41:17.617035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.582 [2024-11-26 07:41:17.617196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.582 [2024-11-26 07:41:17.617204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.582 [2024-11-26 07:41:17.617211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.582 [2024-11-26 07:41:17.617217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.582 [2024-11-26 07:41:17.629078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.582 [2024-11-26 07:41:17.629597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.582 [2024-11-26 07:41:17.629614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.582 [2024-11-26 07:41:17.629619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.582 [2024-11-26 07:41:17.629769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.582 [2024-11-26 07:41:17.629918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.582 [2024-11-26 07:41:17.629925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.582 [2024-11-26 07:41:17.629931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.582 [2024-11-26 07:41:17.629936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.582 [2024-11-26 07:41:17.641775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.582 [2024-11-26 07:41:17.642388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.582 [2024-11-26 07:41:17.642419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.582 [2024-11-26 07:41:17.642428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.582 [2024-11-26 07:41:17.642593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.582 [2024-11-26 07:41:17.642745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.582 [2024-11-26 07:41:17.642752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.582 [2024-11-26 07:41:17.642758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.582 [2024-11-26 07:41:17.642764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.582 [2024-11-26 07:41:17.654472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.582 [2024-11-26 07:41:17.654931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.582 [2024-11-26 07:41:17.654947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.582 [2024-11-26 07:41:17.654953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.582 [2024-11-26 07:41:17.655102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.582 [2024-11-26 07:41:17.655263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.582 [2024-11-26 07:41:17.655270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.582 [2024-11-26 07:41:17.655277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.582 [2024-11-26 07:41:17.655282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.582 [2024-11-26 07:41:17.667168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.582 [2024-11-26 07:41:17.667644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.582 [2024-11-26 07:41:17.667659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.582 [2024-11-26 07:41:17.667665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.582 [2024-11-26 07:41:17.667813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.582 [2024-11-26 07:41:17.667963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.582 [2024-11-26 07:41:17.667969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.582 [2024-11-26 07:41:17.667975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.582 [2024-11-26 07:41:17.667980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.845 [2024-11-26 07:41:17.679819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.845 [2024-11-26 07:41:17.680466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.845 [2024-11-26 07:41:17.680498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.845 [2024-11-26 07:41:17.680512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.845 [2024-11-26 07:41:17.680677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.845 [2024-11-26 07:41:17.680837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.845 [2024-11-26 07:41:17.680846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.845 [2024-11-26 07:41:17.680852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.845 [2024-11-26 07:41:17.680858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.845 [2024-11-26 07:41:17.692413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.845 [2024-11-26 07:41:17.692948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.845 [2024-11-26 07:41:17.692979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.845 [2024-11-26 07:41:17.692989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.845 [2024-11-26 07:41:17.693155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.845 [2024-11-26 07:41:17.693314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.845 [2024-11-26 07:41:17.693322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.845 [2024-11-26 07:41:17.693328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.845 [2024-11-26 07:41:17.693334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.845 [2024-11-26 07:41:17.705031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.845 [2024-11-26 07:41:17.705631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.845 [2024-11-26 07:41:17.705662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.845 [2024-11-26 07:41:17.705671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.845 [2024-11-26 07:41:17.705835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.845 [2024-11-26 07:41:17.705988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.845 [2024-11-26 07:41:17.705996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.845 [2024-11-26 07:41:17.706001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.845 [2024-11-26 07:41:17.706008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:49.845 [2024-11-26 07:41:17.717704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:49.845 [2024-11-26 07:41:17.718290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:49.845 [2024-11-26 07:41:17.718322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420 00:31:49.845 [2024-11-26 07:41:17.718330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set 00:31:49.845 [2024-11-26 07:41:17.718495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor 00:31:49.845 [2024-11-26 07:41:17.718651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:49.845 [2024-11-26 07:41:17.718658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:49.845 [2024-11-26 07:41:17.718665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:49.845 [2024-11-26 07:41:17.718671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:49.845 [2024-11-26 07:41:17.730366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.845 [2024-11-26 07:41:17.730910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.845 [2024-11-26 07:41:17.730941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.845 [2024-11-26 07:41:17.730950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.845 [2024-11-26 07:41:17.731115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.845 [2024-11-26 07:41:17.731274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.845 [2024-11-26 07:41:17.731283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.845 [2024-11-26 07:41:17.731289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.845 [2024-11-26 07:41:17.731296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.845 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:49.845 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:31:49.845 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:49.845 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:49.845 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:49.845 [2024-11-26 07:41:17.742998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.845 [2024-11-26 07:41:17.743561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.845 [2024-11-26 07:41:17.743593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.845 [2024-11-26 07:41:17.743602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.845 [2024-11-26 07:41:17.743767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.845 [2024-11-26 07:41:17.743919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.845 [2024-11-26 07:41:17.743928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.845 [2024-11-26 07:41:17.743934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.845 [2024-11-26 07:41:17.743941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.845 [2024-11-26 07:41:17.755653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.846 [2024-11-26 07:41:17.756264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.846 [2024-11-26 07:41:17.756295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.846 [2024-11-26 07:41:17.756304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.846 [2024-11-26 07:41:17.756473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.846 [2024-11-26 07:41:17.756626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.846 [2024-11-26 07:41:17.756633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.846 [2024-11-26 07:41:17.756640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.846 [2024-11-26 07:41:17.756646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.846 [2024-11-26 07:41:17.768230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.846 [2024-11-26 07:41:17.768698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.846 [2024-11-26 07:41:17.768714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.846 [2024-11-26 07:41:17.768721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.846 [2024-11-26 07:41:17.768870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.846 [2024-11-26 07:41:17.769019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.846 [2024-11-26 07:41:17.769026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.846 [2024-11-26 07:41:17.769031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.846 [2024-11-26 07:41:17.769036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:49.846 [2024-11-26 07:41:17.780884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:49.846 [2024-11-26 07:41:17.781467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.846 [2024-11-26 07:41:17.781499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.846 [2024-11-26 07:41:17.781508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.846 [2024-11-26 07:41:17.781681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.846 [2024-11-26 07:41:17.781834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.846 [2024-11-26 07:41:17.781841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.846 [2024-11-26 07:41:17.781848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.846 [2024-11-26 07:41:17.781854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
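The trap registered a few lines up guarantees that process_shm and nvmftestfini run whether the script exits normally or is interrupted. A hedged, stand-alone sketch of the same idiom; cleanup_target and nvmfpid here are hypothetical stand-ins for the harness's nvmftestfini and its target pid, not names from the test suite:

    cleanup_target() {
        # best-effort teardown; '|| :' mirrors the harness's ignore-errors style
        [ -n "${nvmfpid:-}" ] && kill "$nvmfpid" 2>/dev/null || :
    }
    trap 'cleanup_target' SIGINT SIGTERM EXIT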
00:31:49.846 [2024-11-26 07:41:17.787436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:49.846 [2024-11-26 07:41:17.793552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:49.846 [2024-11-26 07:41:17.794176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.846 [2024-11-26 07:41:17.794207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.846 [2024-11-26 07:41:17.794216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.846 [2024-11-26 07:41:17.794383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.846 [2024-11-26 07:41:17.794535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.846 [2024-11-26 07:41:17.794542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.846 [2024-11-26 07:41:17.794549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.846 [2024-11-26 07:41:17.794554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.846 [2024-11-26 07:41:17.806141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.846 [2024-11-26 07:41:17.806755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.846 [2024-11-26 07:41:17.806786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.846 [2024-11-26 07:41:17.806795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.846 [2024-11-26 07:41:17.806960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.846 [2024-11-26 07:41:17.807112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.846 [2024-11-26 07:41:17.807120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.846 [2024-11-26 07:41:17.807126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.846 [2024-11-26 07:41:17.807132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.846 [2024-11-26 07:41:17.818835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.846 [2024-11-26 07:41:17.819408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.846 [2024-11-26 07:41:17.819439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.846 [2024-11-26 07:41:17.819449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.846 [2024-11-26 07:41:17.819616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.846 [2024-11-26 07:41:17.819768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.846 [2024-11-26 07:41:17.819776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.846 [2024-11-26 07:41:17.819781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.846 [2024-11-26 07:41:17.819787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.846 Malloc0
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:49.846 [2024-11-26 07:41:17.831486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.846 [2024-11-26 07:41:17.831991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.846 [2024-11-26 07:41:17.832007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.846 [2024-11-26 07:41:17.832013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.846 [2024-11-26 07:41:17.832166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.846 [2024-11-26 07:41:17.832316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.846 [2024-11-26 07:41:17.832323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.846 [2024-11-26 07:41:17.832328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.846 [2024-11-26 07:41:17.832333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:49.846 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:49.846 [2024-11-26 07:41:17.844056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.846 [2024-11-26 07:41:17.844397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:49.846 [2024-11-26 07:41:17.844412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f2000 with addr=10.0.0.2, port=4420
00:31:49.846 [2024-11-26 07:41:17.844418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f2000 is same with the state(6) to be set
00:31:49.846 [2024-11-26 07:41:17.844567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f2000 (9): Bad file descriptor
00:31:49.846 [2024-11-26 07:41:17.844716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:49.846 [2024-11-26 07:41:17.844723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:49.846 [2024-11-26 07:41:17.844728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:49.847 [2024-11-26 07:41:17.844733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:49.847 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:49.847 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:49.847 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:49.847 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:49.847 [2024-11-26 07:41:17.852801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:49.847 [2024-11-26 07:41:17.856716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:49.847 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:49.847 07:41:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1642366
00:31:49.847 [2024-11-26 07:41:17.931954] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
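For reference, the five rpc_cmd calls traced above map one-for-one onto SPDK's scripts/rpc.py (rpc_cmd in the test harness is a wrapper around it). A sketch of the same bring-up issued directly; the rpc.py path and the running target's default RPC socket are assumptions, while the subcommands and arguments are copied verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # flags exactly as traced above
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the host-side reconnects stop being refused, which is why the log flips to "Resetting controller successful." right after the wait.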
00:31:51.366 4808.00 IOPS, 18.78 MiB/s
[2024-11-26T06:41:20.505Z] 5806.88 IOPS, 22.68 MiB/s
[2024-11-26T06:41:21.447Z] 6610.22 IOPS, 25.82 MiB/s
[2024-11-26T06:41:22.389Z] 7254.70 IOPS, 28.34 MiB/s
[2024-11-26T06:41:23.331Z] 7769.55 IOPS, 30.35 MiB/s
[2024-11-26T06:41:24.716Z] 8208.08 IOPS, 32.06 MiB/s
[2024-11-26T06:41:25.658Z] 8580.77 IOPS, 33.52 MiB/s
[2024-11-26T06:41:26.602Z] 8892.14 IOPS, 34.73 MiB/s
[2024-11-26T06:41:26.602Z] 9163.53 IOPS, 35.80 MiB/s
00:31:58.504 Latency(us)
00:31:58.504 [2024-11-26T06:41:26.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:58.504 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:58.504 Verification LBA range: start 0x0 length 0x4000
00:31:58.504 Nvme1n1 : 15.01 9166.50 35.81 13468.79 0.00 5635.92 552.96 17913.17
00:31:58.504 [2024-11-26T06:41:26.602Z] ===================================================================================================================
00:31:58.504 [2024-11-26T06:41:26.602Z] Total : 9166.50 35.81 13468.79 0.00 5635.92 552.96 17913.17
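A quick arithmetic check on the table above: the job uses 4096-byte IOs, so MiB/s should equal IOPS * 4096 / 1048576. The one-liner below (plain awk, not part of the harness) confirms the reported pair for the Nvme1n1 row:

    awk 'BEGIN { printf "%.2f MiB/s\n", 9166.50 * 4096 / 1048576 }'   # prints 35.81, matching the table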
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:58.504 rmmod nvme_tcp
00:31:58.504 rmmod nvme_fabrics
00:31:58.504 rmmod nvme_keyring
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1643502 ']'
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1643502
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1643502 ']'
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1643502
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1643502
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1643502'
00:31:58.504 killing process with pid 1643502
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1643502
00:31:58.504 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1643502
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:58.764 07:41:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:00.672 07:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:00.672
00:32:00.672 real 0m28.229s
00:32:00.672 user 1m2.995s
00:32:00.672 sys 0m7.814s
00:32:00.672 07:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:00.672 07:41:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:00.672 ************************************
00:32:00.672 END TEST nvmf_bdevperf
00:32:00.672 ************************************
00:32:00.933 07:41:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:32:00.933 07:41:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:00.933 07:41:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:00.933 07:41:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:00.933 ************************************
00:32:00.933 START TEST nvmf_target_disconnect
00:32:00.933 ************************************
00:32:00.933 07:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:32:00.933 * Looking for test storage...
00:32:00.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:00.933 07:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:00.933 07:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:32:00.933 07:41:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:00.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.933 --rc genhtml_branch_coverage=1 00:32:00.933 --rc genhtml_function_coverage=1 00:32:00.933 --rc genhtml_legend=1 00:32:00.933 --rc geninfo_all_blocks=1 00:32:00.933 --rc geninfo_unexecuted_blocks=1 00:32:00.933 00:32:00.933 ' 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:00.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.933 --rc genhtml_branch_coverage=1 00:32:00.933 --rc genhtml_function_coverage=1 00:32:00.933 --rc genhtml_legend=1 00:32:00.933 --rc geninfo_all_blocks=1 00:32:00.933 --rc geninfo_unexecuted_blocks=1 00:32:00.933 00:32:00.933 ' 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:00.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.933 --rc genhtml_branch_coverage=1 00:32:00.933 --rc genhtml_function_coverage=1 00:32:00.933 --rc genhtml_legend=1 00:32:00.933 --rc geninfo_all_blocks=1 00:32:00.933 --rc geninfo_unexecuted_blocks=1 00:32:00.933 00:32:00.933 ' 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:00.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.933 --rc genhtml_branch_coverage=1 00:32:00.933 --rc genhtml_function_coverage=1 00:32:00.933 --rc genhtml_legend=1 00:32:00.933 --rc geninfo_all_blocks=1 00:32:00.933 --rc geninfo_unexecuted_blocks=1 00:32:00.933 00:32:00.933 ' 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.933 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:01.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.195 07:41:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:09.359 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:09.359 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:09.360 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:09.360 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:09.360 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
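The device scan above settles on the two E810 ports and names them cvl_0_0 and cvl_0_1; the nvmf_tcp_init trace that follows then splits them across network namespaces so target and initiator talk over a real link. Condensed from the commands in the trace below, the wiring is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # confirm the target IP answers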
00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:09.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:32:09.360 00:32:09.360 --- 10.0.0.2 ping statistics --- 00:32:09.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.360 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:32:09.360 00:32:09.360 --- 10.0.0.1 ping statistics --- 00:32:09.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.360 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:09.360 ************************************ 00:32:09.360 START TEST nvmf_target_disconnect_tc1 00:32:09.360 ************************************ 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:32:09.360 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:09.361 07:41:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:09.361 [2024-11-26 07:41:36.762768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:09.361 [2024-11-26 07:41:36.762869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bcead0 with addr=10.0.0.2, port=4420 00:32:09.361 [2024-11-26 07:41:36.762900] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:09.361 [2024-11-26 07:41:36.762921] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:09.361 [2024-11-26 07:41:36.762931] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:32:09.361 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:32:09.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:32:09.361 Initializing NVMe Controllers 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:09.361 00:32:09.361 real 0m0.146s 00:32:09.361 user 0m0.063s 00:32:09.361 sys 0m0.083s 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:32:09.361 ************************************ 00:32:09.361 END TEST nvmf_target_disconnect_tc1 00:32:09.361 ************************************ 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:32:09.361 ************************************ 00:32:09.361 START TEST nvmf_target_disconnect_tc2 00:32:09.361 ************************************ 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1649636 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1649636 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1649636 ']' 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.361 07:41:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:09.361 [2024-11-26 07:41:36.926067] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:32:09.361 [2024-11-26 07:41:36.926132] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.361 [2024-11-26 07:41:37.026390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.361 [2024-11-26 07:41:37.079177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.361 [2024-11-26 07:41:37.079229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
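Behind nvmfappstart -m 0xF0 above is just the target binary launched inside the target namespace, followed by a wait for its RPC socket. A minimal sketch; the rpc_get_methods probe is one plausible way to approximate waitforlisten, not a quote of its implementation:

  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # poll until the app is up and answering on its default RPC socket
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done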
00:32:09.361 [2024-11-26 07:41:37.079238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.361 [2024-11-26 07:41:37.079245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.361 [2024-11-26 07:41:37.079252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.361 [2024-11-26 07:41:37.081647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:09.361 [2024-11-26 07:41:37.081811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:09.361 [2024-11-26 07:41:37.081971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:09.361 [2024-11-26 07:41:37.081971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:09.931 Malloc0 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:09.931 [2024-11-26 07:41:37.837657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:09.931 07:41:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:09.931 [2024-11-26 07:41:37.878071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1649785 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:32:09.931 07:41:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:11.842 07:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1649636 00:32:11.842 07:41:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:32:11.842 Read completed with error (sct=0, sc=8) 00:32:11.842 starting I/O failed 00:32:11.842 Read completed with error (sct=0, sc=8) 00:32:11.842 starting I/O failed 00:32:11.842 Read completed with error (sct=0, sc=8) 00:32:11.842 starting I/O failed 00:32:11.842 Read completed with error (sct=0, sc=8) 00:32:11.842 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error 
(sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Write completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 Read completed with error (sct=0, sc=8) 00:32:11.843 starting I/O failed 00:32:11.843 [2024-11-26 07:41:39.914348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:32:11.843 [2024-11-26 07:41:39.914811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.914840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.915045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.915057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.915461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.915510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 
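Before the disconnect is injected, the rpc_cmd sequence traced above provisions the target end to end (rpc_cmd is a thin wrapper that feeds rpc.py over the target's RPC socket). Collected in one place, in the order the trace runs them:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420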
00:32:11.843 [2024-11-26 07:41:39.915804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.915819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.916123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.916146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.916575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.916625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.916939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.916955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.917147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.917170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.917640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.917690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.917941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.917955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.918293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.918306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.918603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.918615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 00:32:11.843 [2024-11-26 07:41:39.918949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.918963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it. 
00:32:11.843 [2024-11-26 07:41:39.919314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:11.843 [2024-11-26 07:41:39.919329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:11.843 qpair failed and we were unable to recover it.
[... the same two-message error pair (posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 07:41:39.919 through 07:41:39.989, roughly 200 occurrences; only the timestamps differ ...]
00:32:12.128 [2024-11-26 07:41:39.989583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.128 [2024-11-26 07:41:39.989614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.128 qpair failed and we were unable to recover it.
00:32:12.128 [2024-11-26 07:41:39.989957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.128 [2024-11-26 07:41:39.989988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.128 qpair failed and we were unable to recover it. 00:32:12.128 [2024-11-26 07:41:39.990334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.128 [2024-11-26 07:41:39.990368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.128 qpair failed and we were unable to recover it. 00:32:12.128 [2024-11-26 07:41:39.990732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.128 [2024-11-26 07:41:39.990762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.128 qpair failed and we were unable to recover it. 00:32:12.128 [2024-11-26 07:41:39.991116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.128 [2024-11-26 07:41:39.991146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.128 qpair failed and we were unable to recover it. 00:32:12.128 [2024-11-26 07:41:39.991496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.128 [2024-11-26 07:41:39.991529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.128 qpair failed and we were unable to recover it. 00:32:12.128 [2024-11-26 07:41:39.991886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.128 [2024-11-26 07:41:39.991917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.128 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.992169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.992200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.992550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.992580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.992827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.992865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.993231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.993264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 
00:32:12.129 [2024-11-26 07:41:39.993674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.993707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.994054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.994085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.994408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.994439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.994782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.994814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.995180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.995213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.995564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.995596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.995941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.995972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.996339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.996371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.996743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.996776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.997146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.997195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 
00:32:12.129 [2024-11-26 07:41:39.997545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.997576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.997929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.997959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.998203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.998236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.998615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.998648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.999008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.999041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.999389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.999425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:39.999779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:39.999811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:40.000136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:40.000181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:40.000538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:40.000570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:40.000827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:40.000857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 
00:32:12.129 [2024-11-26 07:41:40.001208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:40.001241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:40.001493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:40.001522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:40.001872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.129 [2024-11-26 07:41:40.001903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.129 qpair failed and we were unable to recover it. 00:32:12.129 [2024-11-26 07:41:40.002300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.002334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.003240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.003278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.003660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.003691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.003936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.003967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.004308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.004341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.004722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.004755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.004996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.005028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 
00:32:12.130 [2024-11-26 07:41:40.005365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.005397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.005637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.005669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.005991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.006023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.006265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.006297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.006589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.006619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.006953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.006984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.007174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.007208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.007342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.007373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.007549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.007589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.007761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.007790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 
00:32:12.130 [2024-11-26 07:41:40.008047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.008078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.008379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.008411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.008703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.008733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.009090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.009121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.009518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.009552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.009957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.009989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.010287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.010320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.010705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.010738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.011178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.011212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.011650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.011683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 
00:32:12.130 [2024-11-26 07:41:40.012034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.012064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.012378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.012409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.012671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.012703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.012955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.012985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.013278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.130 [2024-11-26 07:41:40.013311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.130 qpair failed and we were unable to recover it. 00:32:12.130 [2024-11-26 07:41:40.013604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.013636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.013959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.013990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.014372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.014404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.014771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.014802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.015157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.015200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 
00:32:12.131 [2024-11-26 07:41:40.015572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.015603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.015854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.015884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.016220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.016253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.016575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.016608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.016972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.017008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.017233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.017264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.017667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.017701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.018101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.018132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.018533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.018567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.018850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.018880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 
00:32:12.131 [2024-11-26 07:41:40.019239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.019272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.019540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.019570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.019894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.019924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.020285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.020319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.020675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.020709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.021061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.021091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.021438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.021471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.021827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.021858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.022228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.022266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.022646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.022677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 
00:32:12.131 [2024-11-26 07:41:40.023029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.023061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.023406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.023438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.023790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.023821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.024264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.024300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.024689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.024719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.024957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.024987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.025353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.025384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.025745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.025777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.026147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.026192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.026439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.026469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 
00:32:12.131 [2024-11-26 07:41:40.026821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.026851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.027214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.027248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.131 [2024-11-26 07:41:40.027613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.131 [2024-11-26 07:41:40.027644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.131 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.027996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.028026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.028380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.028414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.028763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.028793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.029150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.029192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.029607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.029640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.030139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.030182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.030550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.030581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 
00:32:12.132 [2024-11-26 07:41:40.030839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.030870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.031253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.031288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.031699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.031729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.031905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.031935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.032269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.032301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.032665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.032694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.033117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.033147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.033464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.033497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.033953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.033984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.034386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.034418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 
00:32:12.132 [2024-11-26 07:41:40.034806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.034838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.035105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.035138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.035510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.035541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.035905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.035936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.036215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.036247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.036630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.036660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.036909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.036942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.037205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.037237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.037586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.037625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.037971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.038001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 
00:32:12.132 [2024-11-26 07:41:40.038215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.038248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.038497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.038528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.038882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.038912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.039268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.039300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.039671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.039702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.040062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.040092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.040350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.040385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.040671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.040701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.041058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.041089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 00:32:12.132 [2024-11-26 07:41:40.041335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.041366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.132 qpair failed and we were unable to recover it. 
00:32:12.132 [2024-11-26 07:41:40.041712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.132 [2024-11-26 07:41:40.041742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.133 qpair failed and we were unable to recover it. 00:32:12.133 [2024-11-26 07:41:40.042085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.133 [2024-11-26 07:41:40.042116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.133 qpair failed and we were unable to recover it. 00:32:12.133 [2024-11-26 07:41:40.042506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.133 [2024-11-26 07:41:40.042538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.133 qpair failed and we were unable to recover it. 00:32:12.133 [2024-11-26 07:41:40.042887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.133 [2024-11-26 07:41:40.042919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.133 qpair failed and we were unable to recover it. 00:32:12.133 [2024-11-26 07:41:40.043188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.133 [2024-11-26 07:41:40.043220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.133 qpair failed and we were unable to recover it. 00:32:12.133 [2024-11-26 07:41:40.043589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.133 [2024-11-26 07:41:40.043620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.133 qpair failed and we were unable to recover it. 00:32:12.133 [2024-11-26 07:41:40.043870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.133 [2024-11-26 07:41:40.043899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.133 qpair failed and we were unable to recover it. 00:32:12.133 [2024-11-26 07:41:40.044239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.133 [2024-11-26 07:41:40.044273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.133 qpair failed and we were unable to recover it. 00:32:12.133 [2024-11-26 07:41:40.044629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.133 [2024-11-26 07:41:40.044660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.133 qpair failed and we were unable to recover it. 00:32:12.133 [2024-11-26 07:41:40.044995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.133 [2024-11-26 07:41:40.045026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a4000b90 with addr=10.0.0.2, port=4420 00:32:12.133 qpair failed and we were unable to recover it. 
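The pattern above is uniform: every attempt by SPDK's POSIX sock layer to open a TCP connection to the target at 10.0.0.2:4420 (the conventional NVMe/TCP port) is refused. On Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting connections on that address and port while the initiator kept retrying. As a minimal standalone sketch (plain POSIX C, not SPDK code; only the address and port are taken from the log), this is the socket call the posix_sock_create error corresponds to:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        /* Address and port taken from the log above (the NVMe/TCP target). */
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With nothing listening on 10.0.0.2:4420, connect() fails and
         * errno is 111 (ECONNREFUSED) on Linux, matching the log. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

Run against a host with no listener on port 4420, this prints the same "connect() failed, errno = 111" seen throughout this section.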
00:32:12.133 [2024-11-26 07:41:40.045298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5e00 is same with the state(6) to be set
00:32:12.133 Read completed with error (sct=0, sc=8)
00:32:12.133 starting I/O failed
[... 30 more outstanding I/Os (16 reads, 14 writes) completed with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:32:12.133 Write completed with error (sct=0, sc=8)
00:32:12.133 starting I/O failed
00:32:12.133 [2024-11-26 07:41:40.046319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:12.133 [2024-11-26 07:41:40.046744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.133 [2024-11-26 07:41:40.046799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.133 qpair failed and we were unable to recover it.
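The (sct=0, sc=8) pairs above are NVMe completion statuses: status code type 0 is the generic command status set, in which status code 0x8 is "Command Aborted due to SQ Deletion" -- in-flight reads and writes are aborted when their queue pair is torn down, not failed on media. The -6 returned by spdk_nvme_qpair_process_completions is -ENXIO, matching the "(No such device or address)" text. A sketch of a completion callback that decodes such a status; the callback itself is illustrative, while struct spdk_nvme_cpl, spdk_nvme_cpl_is_error() and spdk_nvme_cpl_get_status_string() come from SPDK's public headers:

```c
/* Sketch (illustrative, not from the test): decode the statuses above. */
#include <stdio.h>
#include "spdk/nvme.h"

static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        /* Here sct=0 (generic command status) and sc=0x8, which the NVMe
         * spec defines as "Command Aborted due to SQ Deletion": the I/O
         * was aborted because its qpair died, so the caller would
         * typically resubmit it on a new qpair rather than fail it. */
        fprintf(stderr, "I/O failed: sct=%d, sc=%d (%s)\n",
                cpl->status.sct, cpl->status.sc,
                spdk_nvme_cpl_get_status_string(&cpl->status));
    }
}
```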
00:32:12.133 [2024-11-26 07:41:40.047154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.133 [2024-11-26 07:41:40.047202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.133 qpair failed and we were unable to recover it.
[... 178 more identical connect()/qpair-failure retries for tqpair=0xab00c0 (07:41:40.047646 through 07:41:40.112727), all errno = 111 ...]
00:32:12.138 [2024-11-26 07:41:40.113124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.138 [2024-11-26 07:41:40.113156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.138 qpair failed and we were unable to recover it.
00:32:12.138 [2024-11-26 07:41:40.113530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.113562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.113923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.113955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.114113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.114148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.114547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.114579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.114816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.114846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.115203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.115236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.115611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.115642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.115995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.116028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.116431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.116464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.116826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.116858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 
00:32:12.138 [2024-11-26 07:41:40.117223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.117258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.117610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.117641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.118578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.118637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.119023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.119057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.119412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.119446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.119803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.119835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.138 qpair failed and we were unable to recover it. 00:32:12.138 [2024-11-26 07:41:40.121670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.138 [2024-11-26 07:41:40.121736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.122124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.122173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.122561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.122594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.122946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.122978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 
00:32:12.139 [2024-11-26 07:41:40.123340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.123373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.123773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.123805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.124203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.124239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.124587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.124620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.124978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.125009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.125369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.125402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.125760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.125792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.126054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.126085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.126440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.126473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.126846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.126881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 
00:32:12.139 [2024-11-26 07:41:40.127235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.127267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.127631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.127663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.128023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.128054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.128411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.128444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.128702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.128738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.129125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.129167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.129526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.129558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.129840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.129870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.130303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.130336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.130689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.130720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 
00:32:12.139 [2024-11-26 07:41:40.131070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.131105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.131369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.131402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.131767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.131798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.132222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.132255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.132601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.132631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.132892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.132925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.133194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.133227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.133652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.133684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.133940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.133973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.134271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.134305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 
00:32:12.139 [2024-11-26 07:41:40.134549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.134579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.134862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.134893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.135136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.135181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.135449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.135481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.139 [2024-11-26 07:41:40.135802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.139 [2024-11-26 07:41:40.135834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.139 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.136132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.136178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.136441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.136472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.136665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.136695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.136887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.136918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.137221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.137254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 
00:32:12.140 [2024-11-26 07:41:40.137440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.137471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.137629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.137672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.138087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.138120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.138427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.138461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.140222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.140287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.140727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.140764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.141022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.141054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.141325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.141359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.141738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.141770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.142136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.142201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 
00:32:12.140 [2024-11-26 07:41:40.142588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.142621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.142976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.143009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.143297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.143329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.143706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.143738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.143978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.144013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.144394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.144427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.144768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.144801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.145168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.145199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.145568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.145601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.145963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.145996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 
00:32:12.140 [2024-11-26 07:41:40.146229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.146261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.146654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.146685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.147034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.147065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.147429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.147464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.147812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.147845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.148211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.148244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.148658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.148690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.149083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.149115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.149503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.149538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.149931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.149964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 
00:32:12.140 [2024-11-26 07:41:40.150321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.150354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.150722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.150756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.151096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.140 [2024-11-26 07:41:40.151127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.140 qpair failed and we were unable to recover it. 00:32:12.140 [2024-11-26 07:41:40.151521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.151555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.151916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.151949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.152310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.152345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.152707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.152739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.153105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.153138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.153479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.153511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.153874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.153905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 
00:32:12.141 [2024-11-26 07:41:40.154151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.154208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.154583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.154614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.154975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.155014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.155375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.155409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.155749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.155780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.156145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.156185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.156587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.156620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.156971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.157003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.157340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.157374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.157737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.157769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 
00:32:12.141 [2024-11-26 07:41:40.158131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.158173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.158554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.158585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.158917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.158949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.159314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.159348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.159714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.159745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.160095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.160127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.160519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.160551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.160805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.160836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.161193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.161224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.161587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.161621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 
00:32:12.141 [2024-11-26 07:41:40.161858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.161893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.162283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.162318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.162700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.162732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.163119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.163152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.163529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.163563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.163898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.163931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.164296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.164329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.164700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.164732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.165099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.165132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.165507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.165540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 
00:32:12.141 [2024-11-26 07:41:40.165904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.165941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.141 [2024-11-26 07:41:40.166300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.141 [2024-11-26 07:41:40.166334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.141 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.166573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.166605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.166938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.166971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.167336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.167370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.167740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.167776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.168134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.168177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.168538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.168572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.168837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.168870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.169100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.169134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 
00:32:12.142 [2024-11-26 07:41:40.169496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.169528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.169883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.169916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.170292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.170327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.170597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.170634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.171025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.171059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.171400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.171436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.171818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.171849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.172216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.172250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.172584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.172616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 00:32:12.142 [2024-11-26 07:41:40.172975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.142 [2024-11-26 07:41:40.173007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.142 qpair failed and we were unable to recover it. 
00:32:12.142 [2024-11-26 07:41:40.173347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.173383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.173696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.173728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.173991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.174025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.174265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.174298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.174684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.174718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.175064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.175098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.175457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.175492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.175775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.175808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.176194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.176228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.176620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.176651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.176889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.176921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.177277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.177309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.177563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.177592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.177968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.178000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.178371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.178405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.178750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.178784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.179172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.179207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.179565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.179597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.179939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.179973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.180342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.142 [2024-11-26 07:41:40.180376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.142 qpair failed and we were unable to recover it.
00:32:12.142 [2024-11-26 07:41:40.180736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.180775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.181134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.181172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.181417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.181454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.181803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.181836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.182253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.182287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.182657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.182689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.183035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.183069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.183429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.183461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.183826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.183858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.184231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.184265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.184626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.184659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.185021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.185052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.185498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.185533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.185913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.185945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.186285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.186317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.186646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.186680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.187036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.187066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.187477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.187511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.187811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.187843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.188227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.188260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.188497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.188531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.188895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.188928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.189301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.189332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.189698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.189728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.190099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.190132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.190464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.190499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.190874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.190905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.191138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.191180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.191580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.191612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.191978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.192009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.192263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.192296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.192663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.192693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.193066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.193101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.193464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.143 [2024-11-26 07:41:40.193498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.143 qpair failed and we were unable to recover it.
00:32:12.143 [2024-11-26 07:41:40.193765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.144 [2024-11-26 07:41:40.193798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.144 qpair failed and we were unable to recover it.
00:32:12.144 [2024-11-26 07:41:40.194107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.144 [2024-11-26 07:41:40.194139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.144 qpair failed and we were unable to recover it.
00:32:12.144 [2024-11-26 07:41:40.194627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.144 [2024-11-26 07:41:40.194662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.144 qpair failed and we were unable to recover it.
00:32:12.144 [2024-11-26 07:41:40.195008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.144 [2024-11-26 07:41:40.195040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.144 qpair failed and we were unable to recover it.
00:32:12.144 [2024-11-26 07:41:40.195396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.144 [2024-11-26 07:41:40.195428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.144 qpair failed and we were unable to recover it.
00:32:12.144 [2024-11-26 07:41:40.195752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.144 [2024-11-26 07:41:40.195786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.144 qpair failed and we were unable to recover it.
00:32:12.144 [2024-11-26 07:41:40.196024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.144 [2024-11-26 07:41:40.196059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.144 qpair failed and we were unable to recover it.
00:32:12.144 [2024-11-26 07:41:40.196437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.144 [2024-11-26 07:41:40.196477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.144 qpair failed and we were unable to recover it.
00:32:12.144 [2024-11-26 07:41:40.196837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.144 [2024-11-26 07:41:40.196871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.144 qpair failed and we were unable to recover it.
00:32:12.144 [2024-11-26 07:41:40.197279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.144 [2024-11-26 07:41:40.197311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.144 qpair failed and we were unable to recover it.
00:32:12.422 [2024-11-26 07:41:40.197656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.422 [2024-11-26 07:41:40.197690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.422 qpair failed and we were unable to recover it.
00:32:12.422 [2024-11-26 07:41:40.198118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.422 [2024-11-26 07:41:40.198154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.422 qpair failed and we were unable to recover it.
00:32:12.422 [2024-11-26 07:41:40.198458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.422 [2024-11-26 07:41:40.198489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.422 qpair failed and we were unable to recover it.
00:32:12.422 [2024-11-26 07:41:40.198841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.422 [2024-11-26 07:41:40.198874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.422 qpair failed and we were unable to recover it.
00:32:12.422 [2024-11-26 07:41:40.199254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.422 [2024-11-26 07:41:40.199287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.422 qpair failed and we were unable to recover it.
00:32:12.422 [2024-11-26 07:41:40.199639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.422 [2024-11-26 07:41:40.199670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.422 qpair failed and we were unable to recover it.
00:32:12.422 [2024-11-26 07:41:40.200021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.422 [2024-11-26 07:41:40.200053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.422 qpair failed and we were unable to recover it.
00:32:12.422 [2024-11-26 07:41:40.200356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.422 [2024-11-26 07:41:40.200388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.422 qpair failed and we were unable to recover it.
00:32:12.422 [2024-11-26 07:41:40.200770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.422 [2024-11-26 07:41:40.200802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.422 qpair failed and we were unable to recover it.
00:32:12.422 [2024-11-26 07:41:40.201169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.201202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.201537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.201570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.201940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.201973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.202356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.202390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.202737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.202770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.203145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.203186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.203609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.203642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.203989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.204022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.204394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.204427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.204788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.204822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.205180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.205215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.205584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.205616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.205876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.205911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.206287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.206320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.206683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.206715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.207080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.207119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.207485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.207519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.207858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.207890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.208198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.208230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.208566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.208600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.208938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.208970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.209375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.209407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.209752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.209783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.210018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.210055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.210432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.210465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.210834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.210866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.211122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.211154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.211561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.211593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.211824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.423 [2024-11-26 07:41:40.211856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.423 qpair failed and we were unable to recover it.
00:32:12.423 [2024-11-26 07:41:40.212294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.212327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.212701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.212735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.213109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.213143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.213507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.213543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.213774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.213806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.214100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.214129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.214488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.214523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.214925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.214958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.215298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.215332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.215725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.215758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.216099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.216132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.216408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.216443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.216848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.216882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.217243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.217278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.217710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.217742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.218085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.218120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.218515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.218549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.218929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.218960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.219271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.219305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.219589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.219622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.219989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.220022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.220415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.220449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.220853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.220886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.221219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.221253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.221640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.221672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.222043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.222077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.222271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.222303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.424 [2024-11-26 07:41:40.222696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.424 [2024-11-26 07:41:40.222735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.424 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.223097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.223130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.223562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.223594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.223834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.223866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.224245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.224277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.224637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.224670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.225021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.225053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.225386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.225420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.225759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.225790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.226148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.226191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.226503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.226534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.226886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.226917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.227189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.227221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.227591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.227624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.227977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.228010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.228341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.228375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.228754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.228786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.229146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.229186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.229607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.229640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.229883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.229914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.230223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.230256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.230635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.230668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.231036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.231069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.231434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.231467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.231729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.231759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.232105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.232136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.232491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.232523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.232898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.232935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.233177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.233211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.233619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.233650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.234034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.234064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.234484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.234519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.234877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.234910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.425 qpair failed and we were unable to recover it.
00:32:12.425 [2024-11-26 07:41:40.235245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.425 [2024-11-26 07:41:40.235279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.426 qpair failed and we were unable to recover it.
00:32:12.426 [2024-11-26 07:41:40.235662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.426 [2024-11-26 07:41:40.235694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.426 qpair failed and we were unable to recover it.
00:32:12.426 [2024-11-26 07:41:40.236057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.426 [2024-11-26 07:41:40.236089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.426 qpair failed and we were unable to recover it.
00:32:12.426 [2024-11-26 07:41:40.236365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.426 [2024-11-26 07:41:40.236398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.426 qpair failed and we were unable to recover it.
00:32:12.426 [2024-11-26 07:41:40.236777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.426 [2024-11-26 07:41:40.236810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.426 qpair failed and we were unable to recover it.
00:32:12.426 [2024-11-26 07:41:40.237155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.426 [2024-11-26 07:41:40.237199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.426 qpair failed and we were unable to recover it.
00:32:12.426 [2024-11-26 07:41:40.237570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.426 [2024-11-26 07:41:40.237603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.426 qpair failed and we were unable to recover it.
00:32:12.426 [2024-11-26 07:41:40.237958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.426 [2024-11-26 07:41:40.237990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.426 qpair failed and we were unable to recover it.
00:32:12.426 [2024-11-26 07:41:40.238409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.238443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.238818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.238852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.239217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.239251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.239655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.239690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.239930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.239965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.240368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.240400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.240758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.240788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.241205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.241238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.241496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.241526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.241813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.241845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 
00:32:12.426 [2024-11-26 07:41:40.242204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.242238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.242642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.242675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.243025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.243058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.243441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.243473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.243853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.243887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.244242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.244276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.244635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.244667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.245016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.245048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.245421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.245454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.245806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.245836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 
00:32:12.426 [2024-11-26 07:41:40.246209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.246243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.246626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.246657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.247007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.247040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.247304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.426 [2024-11-26 07:41:40.247338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.426 qpair failed and we were unable to recover it. 00:32:12.426 [2024-11-26 07:41:40.247712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.247744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.248126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.248157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.248583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.248615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.248958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.248997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.249288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.249320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.249637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.249670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 
00:32:12.427 [2024-11-26 07:41:40.250068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.250099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.250521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.250554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.250909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.250942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.251318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.251352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.251743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.251774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.252122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.252155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.252558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.252589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.252947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.252981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.253352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.253385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.253750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.253781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 
00:32:12.427 [2024-11-26 07:41:40.254124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.254156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.254547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.254579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.254944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.254976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.255273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.255305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.255675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.255707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.256081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.256114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.256474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.256506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.256882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.256913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.257202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.257233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.257479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.257509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 
00:32:12.427 [2024-11-26 07:41:40.257733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.257767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.258015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.258046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.258396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.258430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.258835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.258866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.427 qpair failed and we were unable to recover it. 00:32:12.427 [2024-11-26 07:41:40.259217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.427 [2024-11-26 07:41:40.259258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.259649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.259680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.260039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.260070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.260327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.260359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.260736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.260768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.261112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.261146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 
00:32:12.428 [2024-11-26 07:41:40.261410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.261442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.261770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.261802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.262183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.262215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.262620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.262653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.262999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.263032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.263413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.263445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.263807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.263840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.264205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.264237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.264614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.264647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.265025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.265057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 
00:32:12.428 [2024-11-26 07:41:40.265459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.265492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.265766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.265796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.266170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.266201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.266548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.266581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.266927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.266957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.267177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.267208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.267581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.267613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.428 [2024-11-26 07:41:40.267989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.428 [2024-11-26 07:41:40.268021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.428 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.268278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.268314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.268636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.268667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 
00:32:12.429 [2024-11-26 07:41:40.269013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.269043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.269470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.269503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.269864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.269895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.270311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.270344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.270723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.270756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.271017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.271050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.271397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.271429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.271794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.271827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.272238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.272270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.272643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.272674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 
00:32:12.429 [2024-11-26 07:41:40.272946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.272976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.273233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.273267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.273668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.273699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.273921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.273956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.274296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.274327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.274693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.274730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.275071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.275103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.275463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.275496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.275853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.275885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.276209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.276242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 
00:32:12.429 [2024-11-26 07:41:40.276596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.276628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.276986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.277017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.277296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.277329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.277583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.277613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.277956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.277988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.278379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.278413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.278745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.429 [2024-11-26 07:41:40.278777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.429 qpair failed and we were unable to recover it. 00:32:12.429 [2024-11-26 07:41:40.279187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.279220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.279621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.279652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.279996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.280029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 
00:32:12.430 [2024-11-26 07:41:40.280393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.280425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.280786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.280819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.281185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.281217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.281490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.281521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.281892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.281923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.282281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.282314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.282708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.282739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.283009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.283040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.283401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.283435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.283778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.283808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 
00:32:12.430 [2024-11-26 07:41:40.284153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.284209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.284596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.284627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.284977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.285009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.285268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.285300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.285552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.285583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.285952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.285984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.286242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.286275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.286620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.286650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.287056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.287087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.287447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.287479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 
00:32:12.430 [2024-11-26 07:41:40.287744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.287775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.288122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.288153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.288553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.288585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.288934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.288966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.289476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.289508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.289849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.289882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.290230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.290265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.430 qpair failed and we were unable to recover it. 00:32:12.430 [2024-11-26 07:41:40.290632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.430 [2024-11-26 07:41:40.290663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.291018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.291050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.291397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.291429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 
00:32:12.431 [2024-11-26 07:41:40.291769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.291802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.292170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.292202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.292574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.292606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.292980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.293011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.293282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.293314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.293693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.293723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.294092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.294124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.294501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.294534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.294900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.294933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 00:32:12.431 [2024-11-26 07:41:40.295308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.431 [2024-11-26 07:41:40.295340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.431 qpair failed and we were unable to recover it. 
00:32:12.431 [2024-11-26 07:41:40.295711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:32:12.431 [2024-11-26 07:41:40.295743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 
00:32:12.431 qpair failed and we were unable to recover it. 
00:32:12.431 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously from 07:41:40.295711 through 07:41:40.374998 (stream timestamps 00:32:12.431-00:32:12.438); verbatim duplicates collapsed ...]
00:32:12.438 [2024-11-26 07:41:40.375356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.438 [2024-11-26 07:41:40.375388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.438 qpair failed and we were unable to recover it. 00:32:12.438 [2024-11-26 07:41:40.375741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.438 [2024-11-26 07:41:40.375773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.438 qpair failed and we were unable to recover it. 00:32:12.438 [2024-11-26 07:41:40.376132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.438 [2024-11-26 07:41:40.376177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.438 qpair failed and we were unable to recover it. 00:32:12.438 [2024-11-26 07:41:40.376406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.438 [2024-11-26 07:41:40.376439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.438 qpair failed and we were unable to recover it. 00:32:12.438 [2024-11-26 07:41:40.376823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.438 [2024-11-26 07:41:40.376853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.438 qpair failed and we were unable to recover it. 00:32:12.438 [2024-11-26 07:41:40.377215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.438 [2024-11-26 07:41:40.377249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.438 qpair failed and we were unable to recover it. 00:32:12.438 [2024-11-26 07:41:40.377640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.438 [2024-11-26 07:41:40.377671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.438 qpair failed and we were unable to recover it. 00:32:12.438 [2024-11-26 07:41:40.377896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.438 [2024-11-26 07:41:40.377933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.438 qpair failed and we were unable to recover it. 00:32:12.438 [2024-11-26 07:41:40.378302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.438 [2024-11-26 07:41:40.378335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.438 qpair failed and we were unable to recover it. 00:32:12.438 [2024-11-26 07:41:40.378720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.438 [2024-11-26 07:41:40.378751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.438 qpair failed and we were unable to recover it. 
00:32:12.439 [2024-11-26 07:41:40.379120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.379151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.379559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.379590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.379856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.379887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.380238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.380270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.380639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.380672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.381035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.381068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.381409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.381441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.381795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.381826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.382192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.382223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.382597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.382628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 
00:32:12.439 [2024-11-26 07:41:40.382976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.383009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.383361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.383395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.383760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.383790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.384131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.384187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.384592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.384622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.384976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.385009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.385350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.385382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.385688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.385721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.386084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.386114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.386509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.386541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 
00:32:12.439 [2024-11-26 07:41:40.386897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.386929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.387290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.387322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.387692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.387724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.388078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.388110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.388504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.388536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.388946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.388977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.389326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.389361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.389621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.389653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.389984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.390016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.439 [2024-11-26 07:41:40.390395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.390427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 
00:32:12.439 [2024-11-26 07:41:40.390664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.439 [2024-11-26 07:41:40.390695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.439 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.390947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.390981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.391329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.391362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.391603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.391632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.392003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.392034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.392390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.392423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.392779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.392810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.393183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.393238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.393599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.393635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.393987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.394020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 
00:32:12.440 [2024-11-26 07:41:40.394389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.394422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.394770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.394802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.395155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.395199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.395613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.395644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.395996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.396028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.396388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.396423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.396792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.396822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.397196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.397229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.397607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.397641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.398024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.398056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 
00:32:12.440 [2024-11-26 07:41:40.398416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.398451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.398802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.398834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.399196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.399227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.399590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.399620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.399987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.400019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.400352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.400384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.400736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.440 [2024-11-26 07:41:40.400770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.440 qpair failed and we were unable to recover it. 00:32:12.440 [2024-11-26 07:41:40.401127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.401171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.401506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.401539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.401896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.401928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 
00:32:12.441 [2024-11-26 07:41:40.402289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.402322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.402688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.402719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.403071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.403104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.403391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.403423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.403773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.403804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.404171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.404211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.404575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.404606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.404955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.404988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.405343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.405375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.405624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.405655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 
00:32:12.441 [2024-11-26 07:41:40.406028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.406058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.406425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.406460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.406826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.406859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.407149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.407192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.407577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.407609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.407963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.407995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.408353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.408387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.408745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.408776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.409128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.409180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.409587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.409619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 
00:32:12.441 [2024-11-26 07:41:40.409974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.410007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.410241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.410276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.410667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.410699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.411046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.411078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.411426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.411457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.411802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.411836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.412191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.412223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.412599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.441 [2024-11-26 07:41:40.412631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.441 qpair failed and we were unable to recover it. 00:32:12.441 [2024-11-26 07:41:40.412983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.413014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.413379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.413411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 
00:32:12.442 [2024-11-26 07:41:40.413772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.413805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.414181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.414215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.414574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.414607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.414969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.415000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.415345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.415378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.415733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.415767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.416135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.416184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.416571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.416604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.416943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.416975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.417344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.417377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 
00:32:12.442 [2024-11-26 07:41:40.417735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.417766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.418110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.418141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.418520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.418553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.418930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.418961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.419316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.419351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.419696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.419726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.420099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.420138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.420559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.420593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.420945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.420977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.421347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.421382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 
00:32:12.442 [2024-11-26 07:41:40.421714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.421745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.422098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.422128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.422506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.422539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.422881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.422911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.423282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.423316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.423666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.423700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.424058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.424088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.424465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.424498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.442 qpair failed and we were unable to recover it. 00:32:12.442 [2024-11-26 07:41:40.424865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.442 [2024-11-26 07:41:40.424897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 00:32:12.443 [2024-11-26 07:41:40.425235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.425268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 
00:32:12.443 [2024-11-26 07:41:40.425671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.425704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 00:32:12.443 [2024-11-26 07:41:40.426079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.426111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 00:32:12.443 [2024-11-26 07:41:40.426503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.426535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 00:32:12.443 [2024-11-26 07:41:40.426814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.426844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 00:32:12.443 [2024-11-26 07:41:40.427216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.427250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 00:32:12.443 [2024-11-26 07:41:40.427482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.427516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 00:32:12.443 [2024-11-26 07:41:40.427860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.427892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 00:32:12.443 [2024-11-26 07:41:40.428294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.428329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 00:32:12.443 [2024-11-26 07:41:40.430078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.430146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 00:32:12.443 [2024-11-26 07:41:40.430607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.443 [2024-11-26 07:41:40.430645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.443 qpair failed and we were unable to recover it. 
00:32:12.728 [2024-11-26 07:41:40.508108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.728 [2024-11-26 07:41:40.508139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.728 qpair failed and we were unable to recover it. 00:32:12.728 [2024-11-26 07:41:40.508506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.728 [2024-11-26 07:41:40.508538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.728 qpair failed and we were unable to recover it. 00:32:12.728 [2024-11-26 07:41:40.508888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.728 [2024-11-26 07:41:40.508919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.509278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.509311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.509681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.509712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.510063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.510095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.510445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.510479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.510825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.510858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.511226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.511259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.511645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.511675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 
00:32:12.729 [2024-11-26 07:41:40.512076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.512108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.513908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.513969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.514419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.514459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.516724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.516795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.517247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.517285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.517667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.517699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.518052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.518086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.518428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.518460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.518706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.518737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.519074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.519107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 
00:32:12.729 [2024-11-26 07:41:40.519502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.519536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.519895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.519936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.520261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.520294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.520707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.520737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.521092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.521124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.521516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.521551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.521954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.521986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.522231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.522266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.522633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.522661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.523064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.523100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 
00:32:12.729 [2024-11-26 07:41:40.523473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.523507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.523793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.523823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.524222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.524255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.524608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.524646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.524980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.525009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.525324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.525356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.525711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.525740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.526087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.526117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.729 qpair failed and we were unable to recover it. 00:32:12.729 [2024-11-26 07:41:40.526513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.729 [2024-11-26 07:41:40.526545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.526866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.526910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 
00:32:12.730 [2024-11-26 07:41:40.527320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.527356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.527704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.527735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.528103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.528133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.528494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.528524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.528883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.528914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.529282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.529314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.529684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.529714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.530075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.530104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.530387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.530423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.530809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.530840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 
00:32:12.730 [2024-11-26 07:41:40.531240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.531271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.531607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.531637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.532032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.532062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.532400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.532479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.532837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.532867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.533238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.533276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.533635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.533667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.534029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.534062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.534419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.534449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.534842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.534875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 
00:32:12.730 [2024-11-26 07:41:40.535152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.535197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.535573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.535606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.535973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.536003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.536391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.536422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.536825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.536854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.537222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.537262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.537611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.537640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.538041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.538071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.538451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.538482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.538847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.538876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 
00:32:12.730 [2024-11-26 07:41:40.539058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.539087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.539527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.539558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.539916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.539944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.540310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.540339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.730 qpair failed and we were unable to recover it. 00:32:12.730 [2024-11-26 07:41:40.540693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.730 [2024-11-26 07:41:40.540725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.541069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.541107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.541419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.541451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.541897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.541925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.542281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.542312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.542735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.542767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 
00:32:12.731 [2024-11-26 07:41:40.543180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.543211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.543475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.543504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.543894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.543924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.544297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.544328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.544733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.544763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.545178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.545210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.545555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.545585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.545944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.545973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.546332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.546362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.546726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.546764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 
00:32:12.731 [2024-11-26 07:41:40.547127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.547157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.547593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.547624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.548033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.548064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.548420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.548453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.548796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.548826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.549190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.549222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.549507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.549536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.549894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.549924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.550209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.550239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.550603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.550633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 
00:32:12.731 [2024-11-26 07:41:40.551002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.551031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.551430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.551461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.551870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.551899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.552143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.552209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.552567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.552600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.553011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.553041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.553465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.553497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.553738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.553770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.554174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.554207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.554562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.554591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 
00:32:12.731 [2024-11-26 07:41:40.554964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.554994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.731 [2024-11-26 07:41:40.555395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.731 [2024-11-26 07:41:40.555428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.731 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.555766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.555799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.556147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.556202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.556582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.556613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.556871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.556899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.557363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.557395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.557717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.557746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.558097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.558127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.558653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.558684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 
00:32:12.732 [2024-11-26 07:41:40.559033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.559071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.559467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.559498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.559850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.559878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.560250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.560281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.560610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.560640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.561015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.561044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.561456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.561487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.561850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.561880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.562135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.562174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 00:32:12.732 [2024-11-26 07:41:40.562559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.732 [2024-11-26 07:41:40.562591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.732 qpair failed and we were unable to recover it. 
00:32:12.732 [2024-11-26 07:41:40.562978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.732 [2024-11-26 07:41:40.563008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.732 qpair failed and we were unable to recover it.
00:32:12.732 [... the same three-line error triplet repeats, timestamps aside, for every retry from 07:41:40.563385 through 07:41:40.643589: each connect() attempt to 10.0.0.2, port=4420 on tqpair=0xab00c0 fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered ...]
00:32:12.738 [2024-11-26 07:41:40.643985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.738 [2024-11-26 07:41:40.644015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.738 qpair failed and we were unable to recover it.
00:32:12.738 [2024-11-26 07:41:40.644372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.644403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.644665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.644694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.645035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.645066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.645341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.645371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.645743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.645772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.646139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.646188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.646547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.646577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.646969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.647001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.647274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.647309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.647648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.647677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 
00:32:12.738 [2024-11-26 07:41:40.648036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.648066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.648427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.648459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.648863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.648893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.649235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.649265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.649490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.649520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.649878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.649906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.650270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.650301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.650671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.650702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.651085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.651115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.651479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.651510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 
00:32:12.738 [2024-11-26 07:41:40.651873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.651902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.652262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.652300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.652634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.652663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.653061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.653089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.653461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.653492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.653849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.738 [2024-11-26 07:41:40.653878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.738 qpair failed and we were unable to recover it. 00:32:12.738 [2024-11-26 07:41:40.654253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.654284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.654670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.654699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.655078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.655108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.655471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.655502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 
00:32:12.739 [2024-11-26 07:41:40.655916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.655946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.656319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.656349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.656716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.656745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.657100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.657129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.657525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.657554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.657913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.657945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.658322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.658353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.658724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.658753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.659206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.659238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.659496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.659526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 
00:32:12.739 [2024-11-26 07:41:40.659873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.659903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.660253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.660284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.660644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.660674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.661038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.661067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.661430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.661460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.661859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.661888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.662198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.662228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.662576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.662606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.662962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.662990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.663259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.663294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 
00:32:12.739 [2024-11-26 07:41:40.663646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.663675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.664050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.664079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.664474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.664506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.664912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.664942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.665301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.665331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.665703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.665734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.666079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.666111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.666498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.666530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.666889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.666921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.667274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.667305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 
00:32:12.739 [2024-11-26 07:41:40.667669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.667699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.668091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.668122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.668493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.739 [2024-11-26 07:41:40.668533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.739 qpair failed and we were unable to recover it. 00:32:12.739 [2024-11-26 07:41:40.668934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.668966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.669330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.669366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.669748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.669780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.670131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.670173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.670508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.670538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.670911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.670940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.671304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.671334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 
00:32:12.740 [2024-11-26 07:41:40.671701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.671733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.671989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.672017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.672394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.672425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.672776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.672808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.673215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.673248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.673611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.673640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.673897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.673926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.674285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.674316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.674699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.674728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.675088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.675117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 
00:32:12.740 [2024-11-26 07:41:40.675544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.675576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.675916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.675956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.676308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.676340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.676698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.676728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.677085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.677115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.677505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.677536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.677887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.677917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.678180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.678211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.678610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.678641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.679002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.679037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 
00:32:12.740 [2024-11-26 07:41:40.679392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.679424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.679760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.679790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.680184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.680216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.680574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.680609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.680947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.680984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.681385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.681418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.681620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.681652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.682024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.682054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.682416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.682448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.682816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.682847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 
00:32:12.740 [2024-11-26 07:41:40.683187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.740 [2024-11-26 07:41:40.683217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.740 qpair failed and we were unable to recover it. 00:32:12.740 [2024-11-26 07:41:40.683659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.683690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.684027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.684056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.684418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.684451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.684802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.684835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.685222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.685255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.685520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.685554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.685943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.685972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.686345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.686375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.686752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.686783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 
00:32:12.741 [2024-11-26 07:41:40.687140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.687193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.687546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.687576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.687937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.687967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.688323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.688353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.688714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.688754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.689097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.689135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.689511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.689544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.689794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.689829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.690184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.690218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.690568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.690598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 
00:32:12.741 [2024-11-26 07:41:40.690957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.690987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.691348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.691381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.691716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.691747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.691993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.692021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.692419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.692451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.692787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.692825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.693214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.693244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.693599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.693629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.694014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.694044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 00:32:12.741 [2024-11-26 07:41:40.695794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.741 [2024-11-26 07:41:40.695857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.741 qpair failed and we were unable to recover it. 
00:32:12.741 [2024-11-26 07:41:40.696309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:12.741 [2024-11-26 07:41:40.696356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:12.741 qpair failed and we were unable to recover it.
00:32:12.741 [... the same three entries (connect() failed, errno = 111 / sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeat verbatim for every reconnect attempt from 2024-11-26 07:41:40.696682 through 07:41:40.779589; duplicate occurrences elided ...]
00:32:12.747 [2024-11-26 07:41:40.779921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.779949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.780328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.780359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.780779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.780808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.781194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.781227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.781601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.781632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.781983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.782014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.782370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.782401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.782803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.782832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.783197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.783229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.783587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.783616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 
00:32:12.747 [2024-11-26 07:41:40.783978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.784011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.784351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.784380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.784766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.784796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.747 qpair failed and we were unable to recover it. 00:32:12.747 [2024-11-26 07:41:40.785179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.747 [2024-11-26 07:41:40.785210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.785463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.785491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.785777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.785809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.786179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.786210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.786550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.786580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.786936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.786966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.787234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.787264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 
00:32:12.748 [2024-11-26 07:41:40.787659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.787687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.788048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.788079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.788431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.788463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.788821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.788850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.789209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.789240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.789602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.789632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.789998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.790027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.790474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.790504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.790862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.790895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.791264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.791298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 
00:32:12.748 [2024-11-26 07:41:40.791696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.791726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.792064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.792094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.792461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.792491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.792849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.792883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.793284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.793317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.793714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.793749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.794093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.794131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.794532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.794562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.794812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.794842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.795184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.795216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 
00:32:12.748 [2024-11-26 07:41:40.795580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.795609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.795973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.796002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.796349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.796379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.796778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.796808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.797046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.797074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.797435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.797466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.797856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.797885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.798257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.798287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.798766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.798803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.799188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.799226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 
00:32:12.748 [2024-11-26 07:41:40.799616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.799646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.800000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.748 [2024-11-26 07:41:40.800029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.748 qpair failed and we were unable to recover it. 00:32:12.748 [2024-11-26 07:41:40.800394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.749 [2024-11-26 07:41:40.800425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.749 qpair failed and we were unable to recover it. 00:32:12.749 [2024-11-26 07:41:40.800778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.749 [2024-11-26 07:41:40.800809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.749 qpair failed and we were unable to recover it. 00:32:12.749 [2024-11-26 07:41:40.801180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.749 [2024-11-26 07:41:40.801211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.749 qpair failed and we were unable to recover it. 00:32:12.749 [2024-11-26 07:41:40.801584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.749 [2024-11-26 07:41:40.801613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.749 qpair failed and we were unable to recover it. 00:32:12.749 [2024-11-26 07:41:40.802005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.749 [2024-11-26 07:41:40.802034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.749 qpair failed and we were unable to recover it. 00:32:12.749 [2024-11-26 07:41:40.802392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.749 [2024-11-26 07:41:40.802423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.749 qpair failed and we were unable to recover it. 00:32:12.749 [2024-11-26 07:41:40.802772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:12.749 [2024-11-26 07:41:40.802801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:12.749 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.803155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.803200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 
00:32:13.027 [2024-11-26 07:41:40.803567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.803598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.803964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.804007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.804346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.804385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.804771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.804802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.805151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.805208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.805555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.805585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.805955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.805984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.806346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.806376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.806744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.806773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.807175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.807208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 
00:32:13.027 [2024-11-26 07:41:40.807588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.807616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.808010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.808040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.808399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.027 [2024-11-26 07:41:40.808431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.027 qpair failed and we were unable to recover it. 00:32:13.027 [2024-11-26 07:41:40.808784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.808813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.809131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.809172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.809552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.809582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.809923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.809954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.810312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.810343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.810647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.810677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.811044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.811074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 
00:32:13.028 [2024-11-26 07:41:40.811450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.811480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.811864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.811894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.812232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.812263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.812616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.812645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.813039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.813069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.813448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.813480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.813838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.813867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.814238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.814269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.814617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.814647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.815011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.815040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 
00:32:13.028 [2024-11-26 07:41:40.815446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.815477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.815846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.815876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.816241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.816273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.816526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.816559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.816913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.816942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.817305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.817334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.817733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.817763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.818127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.818155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.818484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.818513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.818907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.818937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 
00:32:13.028 [2024-11-26 07:41:40.819286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.819324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.819689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.819718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.820080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.820109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.820473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.820510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.820865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.820895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.821251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.821283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.821632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.821662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.822021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.822052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.822387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.822421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.822785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.822814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 
00:32:13.028 [2024-11-26 07:41:40.823179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.823220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.028 qpair failed and we were unable to recover it. 00:32:13.028 [2024-11-26 07:41:40.823597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.028 [2024-11-26 07:41:40.823627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.824000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.824029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.824360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.824392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.824746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.824777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.825112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.825141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.825515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.825545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.825864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.825896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.826239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.826270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.826522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.826555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 
00:32:13.029 [2024-11-26 07:41:40.826903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.826933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.827301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.827331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.827689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.827720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.828084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.828113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.828472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.828502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.828866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.828897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.829268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.829299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.829657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.829687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.829954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.829983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 00:32:13.029 [2024-11-26 07:41:40.830336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.029 [2024-11-26 07:41:40.830366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.029 qpair failed and we were unable to recover it. 
00:32:13.029 [2024-11-26 07:41:40.830713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.029 [2024-11-26 07:41:40.830744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.029 qpair failed and we were unable to recover it.
[... the same three-line triplet, connect() failed (errno = 111) from posix.c:1054:posix_sock_create, sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it.", repeats verbatim for every reconnect attempt from 07:41:40.830713 through 07:41:40.910401 (console time 00:32:13.029 to 00:32:13.035); only the per-attempt timestamps change ...]
00:32:13.035 [2024-11-26 07:41:40.910771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.910801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.911142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.911183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.911541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.911570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.911971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.912001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.912344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.912374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.912737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.912767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.913134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.913176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.913571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.913600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.913945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.913982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.914346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.914377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 
00:32:13.035 [2024-11-26 07:41:40.914676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.914707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.915119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.915148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.915515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.915544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.915932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.915962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.916316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.916347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.916713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.916743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.917140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.917184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.917536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.917565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.917823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.917854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.918208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.918241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 
00:32:13.035 [2024-11-26 07:41:40.918623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.918653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.919012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.919043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.919330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.919364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.919735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.919766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.920124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.920156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.920559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.920593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.920957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.920986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.921338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.921376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.921661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.921692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.922042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.922074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 
00:32:13.035 [2024-11-26 07:41:40.922416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.922451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.922863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.922896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.923246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.923277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.923540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.923571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.923969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.035 [2024-11-26 07:41:40.924001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.035 qpair failed and we were unable to recover it. 00:32:13.035 [2024-11-26 07:41:40.924372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.924411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.924764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.924794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.925197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.925230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.925597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.925627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.926000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.926030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 
00:32:13.036 [2024-11-26 07:41:40.926310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.926342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.926706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.926737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.927088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.927120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.927492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.927527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.927887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.927917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.928192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.928224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.928605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.928637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.928989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.929021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.929250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.929282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.929680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.929713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 
00:32:13.036 [2024-11-26 07:41:40.930068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.930098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.930440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.930472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.930835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.930866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.931117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.931152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.931582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.931613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.932012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.932043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.932405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.932437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.932810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.932840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.933201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.933233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.933618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.933648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 
00:32:13.036 [2024-11-26 07:41:40.933874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.933905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.934274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.934308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.934671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.934702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.935064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.935095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.935477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.935510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.935868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.935897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.936241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.936273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.936634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.936664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.036 qpair failed and we were unable to recover it. 00:32:13.036 [2024-11-26 07:41:40.937010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.036 [2024-11-26 07:41:40.937038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.937388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.937418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 
00:32:13.037 [2024-11-26 07:41:40.937779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.937810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.938185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.938216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.938483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.938513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.938923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.938952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.939320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.939350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.939718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.939748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.940134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.940186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.940569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.940599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.940842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.940876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.941234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.941266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 
00:32:13.037 [2024-11-26 07:41:40.941616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.941647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.941986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.942016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.942374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.942407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.942808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.942838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.943194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.943227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.943627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.943658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.944029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.944060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.944399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.944429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.944789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.944820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.945076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.945108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 
00:32:13.037 [2024-11-26 07:41:40.945482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.945513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.945877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.945908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.946307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.946341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.946700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.946729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.947092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.947121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.947496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.947528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.947891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.947921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.948286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.948318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.948681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.948712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.949083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.949112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 
00:32:13.037 [2024-11-26 07:41:40.949525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.949557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.949912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.949953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.950318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.950350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.950795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.950825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.951226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.951257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.951612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.951643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.037 [2024-11-26 07:41:40.952018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.037 [2024-11-26 07:41:40.952047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.037 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.952393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.952425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.952783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.952813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.953147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.953187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 
00:32:13.038 [2024-11-26 07:41:40.953555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.953585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.953960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.953989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.954329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.954359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.954753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.954783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.955140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.955184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.955435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.955465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.955832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.955862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.956225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.956257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.956670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.956701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.957099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.957131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 
00:32:13.038 [2024-11-26 07:41:40.957443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.957478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.957742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.957774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.958199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.958232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.958623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.958653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.959026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.959056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.959414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.959446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.959799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.959828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.960184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.960215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.960603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.960635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.961000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.961030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 
00:32:13.038 [2024-11-26 07:41:40.961388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.961420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.961829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.961861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.962110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.962139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.962528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.962560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.962907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.962936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.963282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.963312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.963676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.963708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.964085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.964116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.964493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.964523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.964882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.964911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 
00:32:13.038 [2024-11-26 07:41:40.965286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.965318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.965649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.965682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.966050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.966079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.966440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.966471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.966818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.038 [2024-11-26 07:41:40.966854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.038 qpair failed and we were unable to recover it. 00:32:13.038 [2024-11-26 07:41:40.967226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.967260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.967641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.967671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.968087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.968116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.968482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.968520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.968882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.968914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 
00:32:13.039 [2024-11-26 07:41:40.969283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.969314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.969696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.969726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.970057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.970088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.970332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.970367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.970707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.970736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.971128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.971169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.971534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.971565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.971922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.971952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.972263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.972297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.972668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.972699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 
00:32:13.039 [2024-11-26 07:41:40.973035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.973067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.973430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.973460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.973804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.973835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.974198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.974230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.974614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.974644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.975040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.975069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.975427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.975458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.975832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.975862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.976240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.976270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.976721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.976750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 
00:32:13.039 [2024-11-26 07:41:40.977114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.977143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.977395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.977424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.977798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.977828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.978227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.978259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.978609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.978639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.979025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.979054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.979408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.979438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.979822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.979851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.980215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.980246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.980629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.980658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 
00:32:13.039 [2024-11-26 07:41:40.981001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.981030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.981395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.981424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.981762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.981793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.039 [2024-11-26 07:41:40.982172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.039 [2024-11-26 07:41:40.982202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.039 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.982508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.982536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.982906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.982936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.983277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.983308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.983679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.983708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.984058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.984088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.984444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.984475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 
00:32:13.040 [2024-11-26 07:41:40.984813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.984842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.985239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.985270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.985692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.985721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.986079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.986108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.986480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.986512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.986872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.986901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.987278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.987308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.987668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.987697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.988081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.988111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.988418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.988449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 
00:32:13.040 [2024-11-26 07:41:40.988836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.988864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.989218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.989248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.989582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.989611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.989986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.990014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.990392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.990424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.990813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.990842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.991217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.991249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.991607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.991638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.992008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.992037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.992400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.992431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 
00:32:13.040 [2024-11-26 07:41:40.992801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.992829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.993234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.993274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.993650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.993685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.994043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.994075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.994425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.994462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.994735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.994764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.995121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.995151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.995488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.995517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.995777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.995806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.996180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.996213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 
00:32:13.040 [2024-11-26 07:41:40.996597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.996629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.996880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.040 [2024-11-26 07:41:40.996912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.040 qpair failed and we were unable to recover it. 00:32:13.040 [2024-11-26 07:41:40.997323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:40.997355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:40.997715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:40.997744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:40.998110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:40.998138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:40.998509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:40.998539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:40.998922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:40.998953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:40.999321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:40.999352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:40.999725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:40.999757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.000181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.000213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 
00:32:13.041 [2024-11-26 07:41:41.000473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.000502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.000849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.000879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.001267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.001300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.001650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.001686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.002056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.002085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.002442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.002472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.002863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.002893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.003228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.003259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.003638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.003666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.004033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.004063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 
00:32:13.041 [2024-11-26 07:41:41.004434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.004465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.004821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.004850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.005250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.005282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.005634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.005663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.006025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.006054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.006452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.006484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.006859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.006888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.007326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.007357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.007708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.007747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.008137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.008176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 
00:32:13.041 [2024-11-26 07:41:41.008521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.008556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.008912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.008943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.009203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.009242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.009622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.009659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.009914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.009944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.010325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.041 [2024-11-26 07:41:41.010358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.041 qpair failed and we were unable to recover it. 00:32:13.041 [2024-11-26 07:41:41.010720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.010749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.011109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.011138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.011514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.011545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.011910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.011939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 
00:32:13.042 [2024-11-26 07:41:41.012316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.012345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.012700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.012730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.013088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.013117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.013495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.013525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.013911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.013942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.014299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.014332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.014684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.014714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.015091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.015121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.015541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.015572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.015933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.015961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 
00:32:13.042 [2024-11-26 07:41:41.016310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.016341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.016732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.016762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.017121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.017150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.017544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.017586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.017945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.017975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.018286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.018317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.018670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.018699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.019065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.019095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.019357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.019392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.019738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.019768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 
00:32:13.042 [2024-11-26 07:41:41.020141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.020189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.020592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.020622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.020976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.021008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.021385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.021416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.021775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.021804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.022175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.022205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.022471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.022504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.022894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.022924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.023287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.023318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.023705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.023735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 
00:32:13.042 [2024-11-26 07:41:41.024096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.024127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.024486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.024520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.024775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.024807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.025208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.025240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.042 [2024-11-26 07:41:41.025597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.042 [2024-11-26 07:41:41.025628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.042 qpair failed and we were unable to recover it. 00:32:13.043 [2024-11-26 07:41:41.025983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.043 [2024-11-26 07:41:41.026012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.043 qpair failed and we were unable to recover it. 00:32:13.043 [2024-11-26 07:41:41.026425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.043 [2024-11-26 07:41:41.026457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.043 qpair failed and we were unable to recover it. 00:32:13.043 [2024-11-26 07:41:41.026837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.043 [2024-11-26 07:41:41.026866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.043 qpair failed and we were unable to recover it. 00:32:13.043 [2024-11-26 07:41:41.027233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.043 [2024-11-26 07:41:41.027262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.043 qpair failed and we were unable to recover it. 00:32:13.043 [2024-11-26 07:41:41.027649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.043 [2024-11-26 07:41:41.027679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.043 qpair failed and we were unable to recover it. 
00:32:13.043 [2024-11-26 07:41:41.028045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.043 [2024-11-26 07:41:41.028074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.043 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 07:41:41.028 through 07:41:41.107 — roughly 200 identical retries, all connect() failures with errno = 111 against tqpair=0xab00c0, addr=10.0.0.2, port=4420; duplicate occurrences elided ...]
00:32:13.327 [2024-11-26 07:41:41.107661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.327 [2024-11-26 07:41:41.107693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.327 qpair failed and we were unable to recover it.
00:32:13.327 [2024-11-26 07:41:41.108070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.327 [2024-11-26 07:41:41.108101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.327 qpair failed and we were unable to recover it. 00:32:13.327 [2024-11-26 07:41:41.108464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.327 [2024-11-26 07:41:41.108494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.327 qpair failed and we were unable to recover it. 00:32:13.327 [2024-11-26 07:41:41.108859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.327 [2024-11-26 07:41:41.108888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.327 qpair failed and we were unable to recover it. 00:32:13.327 [2024-11-26 07:41:41.109230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.327 [2024-11-26 07:41:41.109260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.327 qpair failed and we were unable to recover it. 00:32:13.327 [2024-11-26 07:41:41.109626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.327 [2024-11-26 07:41:41.109659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.327 qpair failed and we were unable to recover it. 00:32:13.327 [2024-11-26 07:41:41.110001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.327 [2024-11-26 07:41:41.110032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.327 qpair failed and we were unable to recover it. 00:32:13.327 [2024-11-26 07:41:41.110292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.327 [2024-11-26 07:41:41.110323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.327 qpair failed and we were unable to recover it. 00:32:13.327 [2024-11-26 07:41:41.110700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.327 [2024-11-26 07:41:41.110731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.327 qpair failed and we were unable to recover it. 00:32:13.327 [2024-11-26 07:41:41.111092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.327 [2024-11-26 07:41:41.111122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.327 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.111497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.111527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 
00:32:13.328 [2024-11-26 07:41:41.111878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.111908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.112307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.112339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.112703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.112732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.113098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.113127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.113480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.113510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.113863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.113910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.114286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.114319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.114736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.114765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.115134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.115177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.115530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.115559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 
00:32:13.328 [2024-11-26 07:41:41.115988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.116017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.116398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.116428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.116785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.116814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.117179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.117210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.117461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.117491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.117843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.117872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.118219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.118250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.118650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.118680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.119031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.119062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.119439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.119473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 
00:32:13.328 [2024-11-26 07:41:41.119852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.119882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.120250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.120279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.120648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.120679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.121026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.121057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.121446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.121476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.121721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.121749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.122112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.122143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.122546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.122575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.122933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.122963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.123326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.123357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 
00:32:13.328 [2024-11-26 07:41:41.123683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.123715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.124090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.124122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.124504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.124534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.124867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.124896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.125237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.125268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.125647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.125675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.125929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.328 [2024-11-26 07:41:41.125958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.328 qpair failed and we were unable to recover it. 00:32:13.328 [2024-11-26 07:41:41.126318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.126349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.126716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.126748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.127119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.127149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 
00:32:13.329 [2024-11-26 07:41:41.127530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.127559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.127933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.127962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.128333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.128363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.128756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.128786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.129143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.129240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.129662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.129692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.129931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.129967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.130358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.130390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.130760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.130790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.131173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.131203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 
00:32:13.329 [2024-11-26 07:41:41.131570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.131600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.131962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.131993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.132340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.132371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.132733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.132762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.133128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.133157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.133523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.133553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.133913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.133942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.134303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.134334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.134686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.134716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.135077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.135107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 
00:32:13.329 [2024-11-26 07:41:41.135582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.135614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.135977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.136006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.136396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.136425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.136806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.136836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.137204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.137236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.137591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.137619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.137982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.138012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.138337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.138367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.138725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.138754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.138982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.139015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 
00:32:13.329 [2024-11-26 07:41:41.139393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.139424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.139765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.139795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.140192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.140223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.140613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.140650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.329 qpair failed and we were unable to recover it. 00:32:13.329 [2024-11-26 07:41:41.140986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.329 [2024-11-26 07:41:41.141015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.141352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.141382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.141756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.141786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.142022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.142052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.142408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.142439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.142798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.142827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 
00:32:13.330 [2024-11-26 07:41:41.143187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.143219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.143612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.143642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.144001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.144030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.144284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.144313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.144684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.144714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.145068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.145099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.145339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.145372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.145725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.145755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.146151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.146191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.146592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.146621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 
00:32:13.330 [2024-11-26 07:41:41.146992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.147021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.147384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.147415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.147782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.147811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.148185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.148215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.148593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.148622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.148983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.149011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.149393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.149424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.149765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.149795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.150141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.150191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.150421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.150450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 
00:32:13.330 [2024-11-26 07:41:41.150744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.150775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.151156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.151202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.151606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.151636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.152039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.152068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.152436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.152470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.152832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.152861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.153230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.153260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.153632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.153662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.154028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.154058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.154401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.154431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 
00:32:13.330 [2024-11-26 07:41:41.154876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.154906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.155308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.155338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.155698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.330 [2024-11-26 07:41:41.155726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.330 qpair failed and we were unable to recover it. 00:32:13.330 [2024-11-26 07:41:41.156094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.331 [2024-11-26 07:41:41.156122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.331 qpair failed and we were unable to recover it. 00:32:13.331 [2024-11-26 07:41:41.156507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.331 [2024-11-26 07:41:41.156543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.331 qpair failed and we were unable to recover it. 00:32:13.331 [2024-11-26 07:41:41.156887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.331 [2024-11-26 07:41:41.156918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.331 qpair failed and we were unable to recover it. 00:32:13.331 [2024-11-26 07:41:41.157321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.331 [2024-11-26 07:41:41.157352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.331 qpair failed and we were unable to recover it. 00:32:13.331 [2024-11-26 07:41:41.157728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.331 [2024-11-26 07:41:41.157757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.331 qpair failed and we were unable to recover it. 00:32:13.331 [2024-11-26 07:41:41.158126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.331 [2024-11-26 07:41:41.158155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.331 qpair failed and we were unable to recover it. 00:32:13.331 [2024-11-26 07:41:41.158554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.331 [2024-11-26 07:41:41.158585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.331 qpair failed and we were unable to recover it. 
00:32:13.331 [2024-11-26 07:41:41.158977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.331 [2024-11-26 07:41:41.159007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.331 qpair failed and we were unable to recover it.
00:32:13.331 [... the same three-line failure sequence repeats continuously from 07:41:41.159 through 07:41:41.241 (console time 00:32:13.331-00:32:13.336), differing only in microsecond timestamps: every connect() attempt for tqpair=0xab00c0 to addr=10.0.0.2, port=4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:32:13.336 [2024-11-26 07:41:41.241467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.336 [2024-11-26 07:41:41.241499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.336 qpair failed and we were unable to recover it. 00:32:13.336 [2024-11-26 07:41:41.241820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.336 [2024-11-26 07:41:41.241849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.336 qpair failed and we were unable to recover it. 00:32:13.336 [2024-11-26 07:41:41.242199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.242232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.242519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.242547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.242892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.242921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.243287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.243318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.243687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.243719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.244080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.244112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.244492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.244522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.244869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.244898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 
00:32:13.337 [2024-11-26 07:41:41.245296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.245328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.245687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.245716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.246075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.246103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.246482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.246512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.246855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.246889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.247224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.247254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.247618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.247647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.248028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.248058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.248424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.248454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.248823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.248851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 
00:32:13.337 [2024-11-26 07:41:41.249236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.249280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.249516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.249545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.249791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.249822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.250066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.250094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.250495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.250525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.250889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.250918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.251291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.251320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.251689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.251719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.252075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.252107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.252414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.252445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 
00:32:13.337 [2024-11-26 07:41:41.252835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.252863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.253227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.253258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.253624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.253654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.254014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.254042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.254440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.254470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.254809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.254838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.255208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.255238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.255626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.255655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.256020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.256050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.337 [2024-11-26 07:41:41.256227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.256260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 
00:32:13.337 [2024-11-26 07:41:41.256639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.337 [2024-11-26 07:41:41.256668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.337 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.257034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.257062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.257451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.257483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.257849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.257879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.258179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.258209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.258586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.258615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.258939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.258970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.259337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.259368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.259732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.259761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.260114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.260145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 
00:32:13.338 [2024-11-26 07:41:41.260436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.260465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.260859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.260888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.261245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.261277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.261645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.261674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.262034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.262062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.262424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.262457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.262809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.262845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.263213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.263242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.263648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.263678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.264051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.264080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 
00:32:13.338 [2024-11-26 07:41:41.264308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.264338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.264727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.264757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.265078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.265110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.265491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.265523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.265870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.265899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.266264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.266294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.266642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.266671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.267069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.267098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.267464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.267495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.267743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.267772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 
00:32:13.338 [2024-11-26 07:41:41.268154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.268213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.268605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.268634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.268983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.269012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.269372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.269404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.269767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.269796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.270063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.270091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.270488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.270519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.270905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.270934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.271189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.338 [2024-11-26 07:41:41.271218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.338 qpair failed and we were unable to recover it. 00:32:13.338 [2024-11-26 07:41:41.271615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.271644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 
00:32:13.339 [2024-11-26 07:41:41.271900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.271932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.272291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.272323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.272706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.272734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.273088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.273117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.273516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.273550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.273908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.273938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.274298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.274330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.274702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.274731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.275089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.275117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.275507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.275538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 
00:32:13.339 [2024-11-26 07:41:41.275900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.275929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.276292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.276324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.276699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.276727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.277090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.277125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.277521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.277552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.277914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.277942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.278332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.278364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.278735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.278771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.279120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.279150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.279522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.279552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 
00:32:13.339 [2024-11-26 07:41:41.279907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.279939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.280348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.280379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.280715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.280744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.281098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.281128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.281504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.281534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.281901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.281930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.282302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.282333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.282694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.282723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.282975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.283003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.283341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.283373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 
00:32:13.339 [2024-11-26 07:41:41.283726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.283757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.284116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.284145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.284510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.284540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.284904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.284937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.285294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.285326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.285699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.285728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.286095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.286126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.339 [2024-11-26 07:41:41.286508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.339 [2024-11-26 07:41:41.286539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.339 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.286921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.286950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.287297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.287329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 
00:32:13.340 [2024-11-26 07:41:41.287691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.287720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.288074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.288104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.288454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.288485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.288792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.288821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.289177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.289214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.289579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.289610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.289969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.290000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.290389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.290422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.290789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.290818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 00:32:13.340 [2024-11-26 07:41:41.291181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.340 [2024-11-26 07:41:41.291212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.340 qpair failed and we were unable to recover it. 
00:32:13.340 [2024-11-26 07:41:41.291568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.340 [2024-11-26 07:41:41.291597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.340 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 200 more times between 07:41:41.291 and 07:41:41.373; verbatim duplicates elided ...]
00:32:13.345 [2024-11-26 07:41:41.373776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.345 [2024-11-26 07:41:41.373807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.345 qpair failed and we were unable to recover it.
00:32:13.345 [2024-11-26 07:41:41.374179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.374213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.374564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.374597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.374915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.374944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.375291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.375322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.375704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.375734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.376079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.376107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.376481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.376513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.376952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.376981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.377377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.377410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.377768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.377797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 
00:32:13.346 [2024-11-26 07:41:41.378173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.378203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.378552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.378582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.378942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.378975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.379318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.379348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.379709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.379738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.380104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.380132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.380499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.380528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.380884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.380914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.381315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.381346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.381711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.381750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 
00:32:13.346 [2024-11-26 07:41:41.382146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.382193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.382451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.382479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.382873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.382903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.383263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.383295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.383657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.383686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.384054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.384083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.384456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.384486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.384876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.384905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.385255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.385285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.385636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.385695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 
00:32:13.346 [2024-11-26 07:41:41.386075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.386106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.386483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.386514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.386869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.386899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.387261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.387291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.387663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.387692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.388055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.388087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.388468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.388499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.388863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.388893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.346 [2024-11-26 07:41:41.389236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.346 [2024-11-26 07:41:41.389267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.346 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.389577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.389607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 
00:32:13.347 [2024-11-26 07:41:41.389951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.389980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.390360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.390391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.390741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.390770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.391175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.391207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.391563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.391593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.391974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.392003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.392274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.392303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.392734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.392763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.393125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.393155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.393499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.393529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 
00:32:13.347 [2024-11-26 07:41:41.393892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.393921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.394314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.394345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.394685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.394716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.395050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.395079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.395319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.395352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.395732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.395763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.396127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.396156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.396548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.396577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.396901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.396930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.397325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.397356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 
00:32:13.347 [2024-11-26 07:41:41.397653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.397682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.398044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.398073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.398438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.398469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.398866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.398897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.399262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.399292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.399659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.399688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.400048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.400078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.400439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.400471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.400857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.400887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.401265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.401295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 
00:32:13.347 [2024-11-26 07:41:41.401629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.401665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.347 [2024-11-26 07:41:41.402032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.347 [2024-11-26 07:41:41.402061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.347 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.402455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.402491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.402873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.402903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.403269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.403300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.403660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.403689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.404048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.404078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.404427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.404458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.404827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.404858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.405214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.405244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 
00:32:13.630 [2024-11-26 07:41:41.405501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.405533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.405768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.405800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.406193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.406226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.406644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.406674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.407016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.407046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.407408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.407438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.407798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.407828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.408199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.408230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.408615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.408644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.409006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.409035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 
00:32:13.630 [2024-11-26 07:41:41.409282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.409316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.409676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.409707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.410067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.410096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.410443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.410472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.410838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.410868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.411110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.411142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.411516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.411545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.411912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.630 [2024-11-26 07:41:41.411948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.630 qpair failed and we were unable to recover it. 00:32:13.630 [2024-11-26 07:41:41.412319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.412349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.412736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.412765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 
00:32:13.631 [2024-11-26 07:41:41.413134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.413173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.413542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.413571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.413931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.413960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.414317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.414351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.414717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.414747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.415100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.415128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.415569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.415599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.415956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.415994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.416326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.416357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.416719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.416748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 
00:32:13.631 [2024-11-26 07:41:41.417114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.417151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.417430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.417462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.417855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.417885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.418232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.418264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.418610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.418639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.419005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.419038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.419390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.419423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.419786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.419816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.420174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.420207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.420590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.420619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 
00:32:13.631 [2024-11-26 07:41:41.420972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.421005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.421361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.421393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.421732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.421761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.422025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.422057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.422447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.422479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.422846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.422877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.423220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.423251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.423618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.423648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.423969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.423999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 00:32:13.631 [2024-11-26 07:41:41.424384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.631 [2024-11-26 07:41:41.424416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.631 qpair failed and we were unable to recover it. 
00:32:13.631 [2024-11-26 07:41:41.424761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.631 [2024-11-26 07:41:41.424790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.631 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair error repeats for ~210 consecutive attempts, timestamps advancing from 07:41:41.424761 to 07:41:41.506596; first and last occurrences shown ...]
00:32:13.637 [2024-11-26 07:41:41.506566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.637 [2024-11-26 07:41:41.506596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.637 qpair failed and we were unable to recover it.
00:32:13.637 [2024-11-26 07:41:41.506960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.506989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.507342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.507372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.507741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.507776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.508132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.508173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.508533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.508562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.508924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.508953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.509318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.509347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.509717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.509747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.510110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.510139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.510595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.510625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 
00:32:13.637 [2024-11-26 07:41:41.510967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.510997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.511332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.511364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.511727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.511756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.512108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.512138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.512510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.512540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.637 [2024-11-26 07:41:41.512823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.637 [2024-11-26 07:41:41.512853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.637 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.513141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.513194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.513455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.513484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.513851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.513880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.514351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.514382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 
00:32:13.638 [2024-11-26 07:41:41.514754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.514784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.515173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.515203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.515568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.515598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.515862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.515895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.516256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.516287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.516661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.516690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.517065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.517094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.517452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.517485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.517835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.517867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.518201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.518230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 
00:32:13.638 [2024-11-26 07:41:41.518637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.518666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.519046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.519076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.519425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.519455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.519814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.519843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.520215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.520245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.520618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.520647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.521016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.521046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.521388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.521419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.521779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.521809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.522203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.522235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 
00:32:13.638 [2024-11-26 07:41:41.522593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.522622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.522975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.523004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.523372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.523403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.523764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.523794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.524157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.524197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.524539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.524570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.524902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.524934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.525281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.525310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.525659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.525688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.526053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.526083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 
00:32:13.638 [2024-11-26 07:41:41.526443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.526473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.526829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.526858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.638 [2024-11-26 07:41:41.527220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.638 [2024-11-26 07:41:41.527250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.638 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.527646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.527676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.528033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.528062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.528432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.528462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.528887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.528916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.529233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.529273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.529638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.529670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.529931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.529960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 
00:32:13.639 [2024-11-26 07:41:41.530359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.530390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.530743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.530775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.531136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.531179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.531550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.531578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.531940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.531969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.532330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.532360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.532724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.532753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.533115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.533144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.533564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.533593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.533954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.533982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 
00:32:13.639 [2024-11-26 07:41:41.534374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.534412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.534755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.534783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.535146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.535185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.535531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.535563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.535935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.535965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.536332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.536363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.536694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.536724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.537061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.537091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.537433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.537464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.537823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.537853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 
00:32:13.639 [2024-11-26 07:41:41.538203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.538233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.538583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.538613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.539005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.539036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.539382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.639 [2024-11-26 07:41:41.539413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.639 qpair failed and we were unable to recover it. 00:32:13.639 [2024-11-26 07:41:41.539774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.539804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.540179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.540209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.540559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.540590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.540955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.540987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.541423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.541454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.541802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.541832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 
00:32:13.640 [2024-11-26 07:41:41.542230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.542262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.542606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.542634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.543012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.543041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.543393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.543423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.543781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.543812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.544198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.544230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.544631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.544661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.545016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.545044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.545412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.545449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.545805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.545835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 
00:32:13.640 [2024-11-26 07:41:41.546197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.546227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.546593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.546623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.546972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.547004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.547422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.547455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.547787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.547816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.548197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.548228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.548591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.548621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.549018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.549049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.549392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.549423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.549790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.549821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 
00:32:13.640 [2024-11-26 07:41:41.550185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.550216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.550572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.550608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.550952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.550984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.551318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.551349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.551716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.551746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.552144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.552184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.552519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.552548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.552911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.552939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.553304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.553335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 00:32:13.640 [2024-11-26 07:41:41.553734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.640 [2024-11-26 07:41:41.553763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.640 qpair failed and we were unable to recover it. 
00:32:13.640 [2024-11-26 07:41:41.554115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.641 [2024-11-26 07:41:41.554145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.641 qpair failed and we were unable to recover it. 00:32:13.641 [2024-11-26 07:41:41.554518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.641 [2024-11-26 07:41:41.554547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.641 qpair failed and we were unable to recover it. 00:32:13.641 [2024-11-26 07:41:41.554893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.641 [2024-11-26 07:41:41.554921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.641 qpair failed and we were unable to recover it. 00:32:13.641 [2024-11-26 07:41:41.555179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.641 [2024-11-26 07:41:41.555208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.641 qpair failed and we were unable to recover it. 00:32:13.641 [2024-11-26 07:41:41.555613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.641 [2024-11-26 07:41:41.555642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.641 qpair failed and we were unable to recover it. 00:32:13.641 [2024-11-26 07:41:41.556000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.641 [2024-11-26 07:41:41.556029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.641 qpair failed and we were unable to recover it. 00:32:13.641 [2024-11-26 07:41:41.556401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.641 [2024-11-26 07:41:41.556431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.641 qpair failed and we were unable to recover it. 00:32:13.641 [2024-11-26 07:41:41.556808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.641 [2024-11-26 07:41:41.556837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.641 qpair failed and we were unable to recover it. 00:32:13.641 [2024-11-26 07:41:41.557193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.641 [2024-11-26 07:41:41.557227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.641 qpair failed and we were unable to recover it. 00:32:13.641 [2024-11-26 07:41:41.557569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.641 [2024-11-26 07:41:41.557599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.641 qpair failed and we were unable to recover it. 
00:32:13.641 [2024-11-26 07:41:41.557983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.641 [2024-11-26 07:41:41.558012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.641 qpair failed and we were unable to recover it.
00:32:13.641 [... the same three-line failure record repeats back-to-back roughly 200 more times with only the timestamps advancing (2024-11-26 07:41:41.558 through 07:41:41.638; log clock 00:32:13.641 through 00:32:13.646): every connect() attempt for tqpair=0xab00c0 to addr=10.0.0.2, port=4420 fails with errno = 111, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:32:13.646 [2024-11-26 07:41:41.638804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.646 [2024-11-26 07:41:41.638833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.646 qpair failed and we were unable to recover it. 00:32:13.646 [2024-11-26 07:41:41.639192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.646 [2024-11-26 07:41:41.639222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.646 qpair failed and we were unable to recover it. 00:32:13.646 [2024-11-26 07:41:41.639576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.646 [2024-11-26 07:41:41.639605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.646 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.639880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.639909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.640141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.640186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.640550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.640579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.640947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.640976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.641358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.641388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.641788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.641817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.642180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.642211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 
00:32:13.647 [2024-11-26 07:41:41.642556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.642585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.642944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.642973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.643339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.643369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.643758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.643787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.644153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.644192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.644539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.644568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.644936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.644966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.645320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.645352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.645743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.645773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.646147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.646187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 
00:32:13.647 [2024-11-26 07:41:41.646528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.646558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.646942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.646971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.647291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.647321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.647701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.647730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.648092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.648121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.648598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.648628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.648996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.649026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.649390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.649421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.649770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.649798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.650186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.650217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 
00:32:13.647 [2024-11-26 07:41:41.650616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.650644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.651001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.651033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.651380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.651412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.651724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.651753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.652109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.652140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.652530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.652560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.652934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.652963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.653310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.653341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.653709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.653738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 00:32:13.647 [2024-11-26 07:41:41.654131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.654171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.647 qpair failed and we were unable to recover it. 
00:32:13.647 [2024-11-26 07:41:41.654533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.647 [2024-11-26 07:41:41.654563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.654937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.654967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.655323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.655353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.655712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.655749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.656100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.656130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.656543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.656572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.656924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.656954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.657351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.657398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.657777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.657808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.658178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.658208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 
00:32:13.648 [2024-11-26 07:41:41.658540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.658570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.658926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.658958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.659327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.659357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.659724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.659754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.659980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.660012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.660278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.660309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.660681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.660710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.661109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.661139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.661498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.661527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.661882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.661920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 
00:32:13.648 [2024-11-26 07:41:41.662173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.662203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.662543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.662573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.662924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.662954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.663325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.663355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.663718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.663747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.664109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.664143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.664509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.664539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.664903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.664931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.665330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.665362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.665728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.665756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 
00:32:13.648 [2024-11-26 07:41:41.666119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.666148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.666555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.666587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.666976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.667006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.667386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.667416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.667774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.667805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.668181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.668213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.668575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.668606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.648 [2024-11-26 07:41:41.668959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.648 [2024-11-26 07:41:41.668989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.648 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.669354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.669385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.669747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.669778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 
00:32:13.649 [2024-11-26 07:41:41.670189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.670220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.670568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.670604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.670942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.670972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.671316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.671347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.671750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.671792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.672182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.672214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.672573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.672602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.672965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.672995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.673420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.673453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.673848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.673878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 
00:32:13.649 [2024-11-26 07:41:41.674230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.674261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.674609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.674639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.674998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.675027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.675456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.675487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.675834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.675864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.676217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.676249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.676497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.676526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.676929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.676959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.677314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.677346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.677703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.677734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 
00:32:13.649 [2024-11-26 07:41:41.678079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.678119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.678524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.678556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.678910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.678942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.679297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.679329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.679671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.679701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.680063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.680093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.680463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.680493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.680851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.680880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.681188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.681220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.681558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.681588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 
00:32:13.649 [2024-11-26 07:41:41.681977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.682008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.682423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.682462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.649 qpair failed and we were unable to recover it. 00:32:13.649 [2024-11-26 07:41:41.682720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.649 [2024-11-26 07:41:41.682749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.683117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.683147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.683558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.683588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.683966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.683997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.684236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.684268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.684586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.684617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.684859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.684891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.685237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.685268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 
00:32:13.650 [2024-11-26 07:41:41.685605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.685634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.685992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.686022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.686430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.686463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.686748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.686779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.687156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.687198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.687528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.687558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.687902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.687932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.688295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.688327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.688725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.688755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 00:32:13.650 [2024-11-26 07:41:41.689117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.650 [2024-11-26 07:41:41.689148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.650 qpair failed and we were unable to recover it. 
00:32:13.650 [2024-11-26 07:41:41.689547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.650 [2024-11-26 07:41:41.689578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.650 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 07:41:41.689547 through 07:41:41.770859 (wall clock 00:32:13.650-00:32:13.933): connect() failed, errno = 111; sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:32:13.933 [2024-11-26 07:41:41.770831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.933 [2024-11-26 07:41:41.770859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.933 qpair failed and we were unable to recover it.
00:32:13.933 [2024-11-26 07:41:41.771222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.771251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.771624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.771654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.772017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.772045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.772416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.772446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.772797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.772826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.773185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.773218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.773579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.773610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.774025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.774053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.774408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.774439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.774808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.774837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 
00:32:13.933 [2024-11-26 07:41:41.775201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.775232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.775612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.775640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.775879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.775911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.776276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.776308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.776655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.776685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.777034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.777063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.777425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.777462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.777814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.777857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.778200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.778230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.778578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.778608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 
00:32:13.933 [2024-11-26 07:41:41.778974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.779003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.933 qpair failed and we were unable to recover it. 00:32:13.933 [2024-11-26 07:41:41.779379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.933 [2024-11-26 07:41:41.779408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.779767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.779796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.780154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.780211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.780602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.780631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.780993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.781023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.781409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.781440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.781833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.781862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.782236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.782267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.782620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.782650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 
00:32:13.934 [2024-11-26 07:41:41.783006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.783035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.783382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.783411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.783699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.783728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.784092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.784123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.784505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.784536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.784893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.784922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.785278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.785309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.785676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.785706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.786087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.786116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.786504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.786534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 
00:32:13.934 [2024-11-26 07:41:41.786888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.786919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.787290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.787323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.787681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.787710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.788062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.788094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.788465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.788495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.788783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.788812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.789179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.789210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.789574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.789603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.789966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.789995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.790387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.790419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 
00:32:13.934 [2024-11-26 07:41:41.790765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.790796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.791138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.791177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.791555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.791584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.791947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.791976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.792324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.792356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.792722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.792751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.793112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.793140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.793519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.934 [2024-11-26 07:41:41.793556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.934 qpair failed and we were unable to recover it. 00:32:13.934 [2024-11-26 07:41:41.793919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.793949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.794310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.794342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 
00:32:13.935 [2024-11-26 07:41:41.794704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.794733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.795094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.795125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.795503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.795534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.795896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.795924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.796322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.796353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.796713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.796742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.797097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.797127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.797465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.797496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.797845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.797876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.798253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.798285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 
00:32:13.935 [2024-11-26 07:41:41.798635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.798665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.799099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.799135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.799499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.799534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.799898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.799927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.800300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.800332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.800693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.800721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.801079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.801110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.801494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.801525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.801897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.801925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.802278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.802310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 
00:32:13.935 [2024-11-26 07:41:41.802724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.802755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.803103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.803133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.803533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.803565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.803925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.803954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.804332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.804373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.804744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.804775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.805145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.805185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.805546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.805576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.805929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.805959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.806302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.806332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 
00:32:13.935 [2024-11-26 07:41:41.806740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.806773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.807112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.807150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.807555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.807587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.807956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.807988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.808228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.935 [2024-11-26 07:41:41.808259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.935 qpair failed and we were unable to recover it. 00:32:13.935 [2024-11-26 07:41:41.808546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.808575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.808968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.808997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.809349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.809380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.809748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.809777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.810141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.810183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 
00:32:13.936 [2024-11-26 07:41:41.810535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.810564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.810917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.810950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.811377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.811410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.811776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.811805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.812177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.812208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.812537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.812565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.812960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.812990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.813350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.813382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.813747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.813776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.814139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.814179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 
00:32:13.936 [2024-11-26 07:41:41.814522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.814550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.814936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.814965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.815336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.815369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.815731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.815759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.816070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.816099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.816484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.816516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.816888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.816917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.817279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.817309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.817699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.817731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.818025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.818054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 
00:32:13.936 [2024-11-26 07:41:41.818295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.818325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.818698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.818727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.819083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.819111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.819476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.819509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.819876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.819906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.820271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.820307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.820671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.820701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.821062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.821091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.821346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.821376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 00:32:13.936 [2024-11-26 07:41:41.821725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.936 [2024-11-26 07:41:41.821754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.936 qpair failed and we were unable to recover it. 
00:32:13.936 [2024-11-26 07:41:41.822115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.936 [2024-11-26 07:41:41.822146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.936 qpair failed and we were unable to recover it.
00:32:13.936 [... the same three-line failure record repeats back-to-back from 07:41:41.822 through 07:41:41.904, every attempt failing with errno = 111 against tqpair=0xab00c0, addr=10.0.0.2, port=4420 ...]
00:32:13.943 [2024-11-26 07:41:41.904048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.943 [2024-11-26 07:41:41.904079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.943 qpair failed and we were unable to recover it.
00:32:13.943 [2024-11-26 07:41:41.904417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.943 [2024-11-26 07:41:41.904448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.943 qpair failed and we were unable to recover it.
00:32:13.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1649636 Killed "${NVMF_APP[@]}" "$@"
00:32:13.943 [2024-11-26 07:41:41.907542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.943 [2024-11-26 07:41:41.907571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.943 qpair failed and we were unable to recover it.
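errno = 111 is ECONNREFUSED on Linux: the initiator's connect() to 10.0.0.2:4420 is answered with a reset because nvmf_tgt, killed above by target_disconnect.sh, is no longer listening on that port. A minimal shell probe reproducing the same condition, assuming bash's /dev/tcp pseudo-device; this is an illustrative sketch, not part of the test suite:

    # Probe the NVMe-oF listener the way connect() does; the subshell keeps
    # the fd out of the current shell. Address and port come from the log above.
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "port 4420 is accepting connections"
    else
        echo "connection refused (errno 111): no listener on 4420 yet"
    fi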
00:32:13.943 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:32:13.943 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:13.943 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:13.943 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:13.943 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:13.943 [2024-11-26 07:41:41.908314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.943 [2024-11-26 07:41:41.908347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.943 qpair failed and we were unable to recover it.
00:32:13.943 [2024-11-26 07:41:41.910787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.943 [2024-11-26 07:41:41.910817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.943 qpair failed and we were unable to recover it.
00:32:13.944 [2024-11-26 07:41:41.915004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.944 [2024-11-26 07:41:41.915035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.944 qpair failed and we were unable to recover it.
00:32:13.944 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1650484
00:32:13.944 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1650484
00:32:13.944 [2024-11-26 07:41:41.917706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.944 [2024-11-26 07:41:41.917738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.944 qpair failed and we were unable to recover it.
00:32:13.944 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:13.944 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1650484 ']'
00:32:13.944 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:13.944 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:13.944 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:13.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:13.944 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:13.944 07:41:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:13.944 [2024-11-26 07:41:41.918535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.944 [2024-11-26 07:41:41.918566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.944 qpair failed and we were unable to recover it.
00:32:13.944 [2024-11-26 07:41:41.920132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.944 [2024-11-26 07:41:41.920181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.944 qpair failed and we were unable to recover it.
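The xtrace above shows disconnect_init restarting nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocking until pid 1650484 opens /var/tmp/spdk.sock. A minimal sketch of that wait pattern, assuming the pid and socket path logged above; the real waitforlisten in autotest_common.sh may differ in detail:

    # Poll until the target's RPC UNIX socket appears, bailing out if the
    # process dies first. Values are the ones the harness logged above.
    nvmfpid=1650484
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    while [ "$max_retries" -gt 0 ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        [ -S "$rpc_addr" ] && break   # socket exists: target is listening
        max_retries=$((max_retries - 1))
        sleep 0.1
    done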
00:32:13.944 [2024-11-26 07:41:41.920544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.944 [2024-11-26 07:41:41.920575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.944 qpair failed and we were unable to recover it.
00:32:13.947 [2024-11-26 07:41:41.967206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:13.947 [2024-11-26 07:41:41.967237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:13.947 qpair failed and we were unable to recover it.
00:32:13.947 [2024-11-26 07:41:41.967652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.947 [2024-11-26 07:41:41.967682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.947 qpair failed and we were unable to recover it. 00:32:13.947 [2024-11-26 07:41:41.968087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.947 [2024-11-26 07:41:41.968119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.947 qpair failed and we were unable to recover it. 00:32:13.947 [2024-11-26 07:41:41.968498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.947 [2024-11-26 07:41:41.968530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.947 qpair failed and we were unable to recover it. 00:32:13.947 [2024-11-26 07:41:41.968912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.947 [2024-11-26 07:41:41.968942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.947 qpair failed and we were unable to recover it. 00:32:13.947 [2024-11-26 07:41:41.969298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.947 [2024-11-26 07:41:41.969329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.947 qpair failed and we were unable to recover it. 00:32:13.947 [2024-11-26 07:41:41.969728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.969763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.970118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.970149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.970547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.970580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.970963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.970997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.971403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.971434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 
00:32:13.948 [2024-11-26 07:41:41.971806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.971845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.972208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.972241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.972602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.972632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.973039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.973071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.973322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.973353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.973700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.973729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.974118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.974147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.974498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.974529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.974895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.974926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.975306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.975339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 
00:32:13.948 [2024-11-26 07:41:41.975723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.975753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.976132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.976187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.976600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.976630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.977010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.977040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.977413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.977446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.977825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.977854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.978233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.978262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.978499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.978528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.978885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.978922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.979294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.979326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 
00:32:13.948 [2024-11-26 07:41:41.979700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.979729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.980096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.980137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.980435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.980464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.980654] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:32:13.948 [2024-11-26 07:41:41.980716] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.948 [2024-11-26 07:41:41.980821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.980850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.981233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.981264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.981642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.981671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.982076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.982114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.982497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.982530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 00:32:13.948 [2024-11-26 07:41:41.982909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:13.948 [2024-11-26 07:41:41.982939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:13.948 qpair failed and we were unable to recover it. 
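The EAL parameter line above records the DPDK runtime configuration for the nvmf target process: -c 0xF0 is a hexadecimal coremask selecting logical cores 4-7, --file-prefix=spdk0 namespaces the hugepage/shared-memory files so several DPDK processes can coexist on one host, and --proc-type=auto lets EAL detect whether it is the primary or a secondary process. A minimal standalone sketch (illustrative only, not SPDK or test code) of how such a coremask decodes:

    /* Sketch: decode a DPDK-style hexadecimal coremask such as the
     * "-c 0xF0" above. Bit n set means logical core n is enabled,
     * so 0xF0 (binary 1111 0000) selects cores 4, 5, 6 and 7. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long coremask = 0xF0UL;            /* value from the log line */

        for (unsigned core = 0; core < 8 * sizeof coremask; core++)
            if (coremask & (1UL << core))
                printf("core %u enabled\n", core);  /* prints cores 4..7 */
        return 0;
    }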
[... the identical connect failure resumes immediately after the initialization banner and repeats roughly 135 more times through 07:41:42.032, each occurrence differing only in its timestamps; the console-clock prefix advances from 00:32:13.948 to 00:32:14.230 over this stretch ...]
00:32:14.230 [2024-11-26 07:41:42.032893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.032925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.033289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.033321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.033661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.033690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.034022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.034060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.034419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.034450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.034787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.034817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.035072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.035102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.035473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.035509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.035907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.035937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.036306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.036338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 
00:32:14.230 [2024-11-26 07:41:42.036710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.036741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.037088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.037117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.037500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.037530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.037860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.037889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.038249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.038280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.038622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.038652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.038943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.038974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.039217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.039251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.039620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.039650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.040009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.040038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 
00:32:14.230 [2024-11-26 07:41:42.040400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.040434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.040783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.040814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.041190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.041223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.041606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.041635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.041872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.041901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.042189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.042220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.042593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.042621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.042856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.042887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.043239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.043270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.043632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.043661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 
00:32:14.230 [2024-11-26 07:41:42.044030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.044061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.044434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.044467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.230 [2024-11-26 07:41:42.044811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.230 [2024-11-26 07:41:42.044847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.230 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.045184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.045215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.045568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.045601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.045826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.045857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.046108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.046137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.046318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.046351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.046740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.046770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.047088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.047126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 
00:32:14.231 [2024-11-26 07:41:42.047428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.047459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.047771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.047802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.048196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.048228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.048604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.048632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.048996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.049077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.049410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.049444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.049792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.049824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.050084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.050117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.050501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.050534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.050866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.050897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 
00:32:14.231 [2024-11-26 07:41:42.051265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.051299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.051579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.051609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.051930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.051963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.052292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.052323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.052612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.052642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.053004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.053033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.053247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.053277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.053549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.053577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.053946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.053978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.054329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.054361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 
00:32:14.231 [2024-11-26 07:41:42.054753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.054781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.055172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.055202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.055560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.055591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.055898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.055926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.056309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.056340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.056705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.056735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.057000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.057029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.057394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.057426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.057832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.057861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.058222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.058252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 
00:32:14.231 [2024-11-26 07:41:42.058612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.058641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.058976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.059013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.059384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.059415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.059765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.059795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.060039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.060068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.060420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.060454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.060824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.060854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.061210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.061241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.061624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.061655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.062006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.062035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 
00:32:14.231 [2024-11-26 07:41:42.062440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.062470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.062823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.062852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.063145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.063188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.063587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.063617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.063990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.064019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.064409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.064440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.231 [2024-11-26 07:41:42.064804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.231 [2024-11-26 07:41:42.064834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.231 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.065187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.065219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.065580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.065609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.065931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.065962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 
00:32:14.232 [2024-11-26 07:41:42.066329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.066361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.066710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.066739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.067096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.067124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.067517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.067548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.067910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.067943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.068288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.068318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.068525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.068554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.068915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.068944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.069271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.069300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.069692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.069722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 
00:32:14.232 [2024-11-26 07:41:42.070097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.070126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.070505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.070534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.070905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.070934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.071294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.071326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.071694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.071723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.072083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.072112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.072392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.072426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.072748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.072776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.072999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.073032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.073390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.073421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 
00:32:14.232 [2024-11-26 07:41:42.073781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.073810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.074191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.074222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.074582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.074618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.074822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.074851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.075205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.075235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.075594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.075623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.075992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.076020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.076398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.076430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.076800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.076830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.077190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.077221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 
00:32:14.232 [2024-11-26 07:41:42.077578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.077606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.077951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.077980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.078205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.078235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.078571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.078605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.078941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.078971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.079333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.079364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.079692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.079720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.079901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:14.232 [2024-11-26 07:41:42.080033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.080064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.080409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.080442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.080812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.080842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 
00:32:14.232 [2024-11-26 07:41:42.081217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.081248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.081637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.232 [2024-11-26 07:41:42.081666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.232 qpair failed and we were unable to recover it. 00:32:14.232 [2024-11-26 07:41:42.081979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.082008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.082408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.082440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.082814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.082843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.083207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.083238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.083587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.083621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.083836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.083864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.084254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.084286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.084678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.084709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 
00:32:14.233 [2024-11-26 07:41:42.085086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.085115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.085492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.085523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.085889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.085922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.086295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.086326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.086623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.086652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.087011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.087043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.087416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.087446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.087803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.087834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.088092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.088121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.088534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.088565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 
00:32:14.233 [2024-11-26 07:41:42.088948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.088979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.089334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.089367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.089749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.089778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.090035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.090065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.090438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.090471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.090812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.090843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.091226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.091258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.091638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.091666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.092033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.092063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.092414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.092446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 
00:32:14.233 [2024-11-26 07:41:42.092823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.092854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.093223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.093254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.093649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.093678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.093913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.093944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.094299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.094331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.094735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.094766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.095139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.095185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.095551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.095583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.095827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.095856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.096084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.096118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 
00:32:14.233 [2024-11-26 07:41:42.096481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.096512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.096895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.096925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.097289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.097320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.097645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.097675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.098050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.098080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.098460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.098490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.098865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.098895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.099242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.099273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.099650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.099678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.100017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.100047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 
00:32:14.233 [2024-11-26 07:41:42.100432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.100463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.100820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.100857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.101228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.101260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.101621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.101652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.102016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.102046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.102275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.233 [2024-11-26 07:41:42.102306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.233 qpair failed and we were unable to recover it. 00:32:14.233 [2024-11-26 07:41:42.102599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.102628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.102986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.103016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.103405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.103436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.103800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.103831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 
00:32:14.234 [2024-11-26 07:41:42.104233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.104263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.104509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.104541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.104907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.104937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.105292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.105324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.105698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.105728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.106089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.106119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.106509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.106541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.106755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.106784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.107154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.107197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.107595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.107623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 
00:32:14.234 [2024-11-26 07:41:42.107973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.108002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.108434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.108466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.108847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.108876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.109258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.109289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.109666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.109696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.110061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.110092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.110437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.110468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.110777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.110813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.111175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.111206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.111601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.111631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 
00:32:14.234 [2024-11-26 07:41:42.112043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.112072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.112496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.112526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.112868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.112899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.113232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.113264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.113648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.113677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.114035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.114065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.114290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.114322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.114633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.114661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.115013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.115044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.115412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.115444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 
00:32:14.234 [2024-11-26 07:41:42.115817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.115846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.116231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.116262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.116634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.116664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.117034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.117063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.117432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.117464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.117720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.117749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.118073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.118108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.118391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.118420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.118809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.118839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.119066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.119094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 
00:32:14.234 [2024-11-26 07:41:42.119463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.119494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.119831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.119863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.120218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.120250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.120550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.120578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.120954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.120991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.121341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.121373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.121759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.121790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.122123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.122152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.122527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.122557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 00:32:14.234 [2024-11-26 07:41:42.122982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.234 [2024-11-26 07:41:42.123011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.234 qpair failed and we were unable to recover it. 
00:32:14.234 [2024-11-26 07:41:42.123387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.123418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.123791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.123821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.123973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:14.235 [2024-11-26 07:41:42.124009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:14.235 [2024-11-26 07:41:42.124015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:14.235 [2024-11-26 07:41:42.124020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:14.235 [2024-11-26 07:41:42.124024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:14.235 [2024-11-26 07:41:42.124180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.124210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.124597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.124625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.124937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.124966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.125269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.125299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.125641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.125670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 
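The app_setup_trace notices above describe the runtime trace workflow for this run. A minimal sketch of it, assuming the build's spdk_trace binary is on PATH; the -s/-i form and the /dev/shm copy come straight from the notice, while parsing the copied file with -f is an assumption about the installed spdk_trace. The errno decode is included because every connect() failure in this section reports errno = 111, which on Linux is ECONNREFUSED (nothing listening yet on 10.0.0.2:4420):
+ python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
ECONNREFUSED Connection refused
+ spdk_trace -s nvmf -i 0            # snapshot of events at runtime, per the notice
+ cp /dev/shm/nvmf_trace.0 /tmp/     # copy the shm file for offline analysis, per the notice
+ spdk_trace -f /tmp/nvmf_trace.0    # assumed: -f parses a copied trace file offline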
00:32:14.235 [2024-11-26 07:41:42.125843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:14.235 [2024-11-26 07:41:42.125983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:14.235 [2024-11-26 07:41:42.126113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:14.235 [2024-11-26 07:41:42.126113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:14.235 [2024-11-26 07:41:42.126078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.126108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.126355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.126386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.126770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.126799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.127204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.127236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.127457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.127485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.127822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.127850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.128205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.128246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.128481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.128514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.128897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.128926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it.
00:32:14.235 [2024-11-26 07:41:42.129188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.129219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.129563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.129592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.129925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.129962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.130197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.130228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.130517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.130545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.130903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.130934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.131186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.131216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.131561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.131589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.131949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.131978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.132336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.132366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 
00:32:14.235 [2024-11-26 07:41:42.132742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.132771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.133113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.133142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.133483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.133513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.133862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.133891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.134254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.134285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.134546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.134578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.134926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.134955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.135217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.135247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.135586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.135617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.135850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.135878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 
00:32:14.235 [2024-11-26 07:41:42.136237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.136269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.136513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.136541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.136778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.136811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.137036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.137065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.137424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.137456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.137707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.137737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.138073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.138108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.138371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.138401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.138763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.235 [2024-11-26 07:41:42.138793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.235 qpair failed and we were unable to recover it. 00:32:14.235 [2024-11-26 07:41:42.139152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.139194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 
00:32:14.236 [2024-11-26 07:41:42.139580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.139610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 00:32:14.236 [2024-11-26 07:41:42.139979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.140008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 00:32:14.236 [2024-11-26 07:41:42.140332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.140363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 00:32:14.236 [2024-11-26 07:41:42.140623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.140651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 00:32:14.236 [2024-11-26 07:41:42.140778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.140807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 00:32:14.236 [2024-11-26 07:41:42.141145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.141186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 00:32:14.236 [2024-11-26 07:41:42.141507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.141542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 00:32:14.236 [2024-11-26 07:41:42.141906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.141937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 00:32:14.236 [2024-11-26 07:41:42.142291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.142323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 00:32:14.236 [2024-11-26 07:41:42.142552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.236 [2024-11-26 07:41:42.142580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.236 qpair failed and we were unable to recover it. 
00:32:14.236 [2024-11-26 07:41:42.142917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.236 [2024-11-26 07:41:42.142947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.236 qpair failed and we were unable to recover it.
00:32:14.240 [... the same three-line error repeats back-to-back roughly 210 times, from 2024-11-26 07:41:42.142917 through 07:41:42.216753 (wall clock 00:32:14.236 to 00:32:14.240): every attempt is a connect() failure with errno = 111 on tqpair=0xab00c0 against 10.0.0.2 port 4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:32:14.240 [2024-11-26 07:41:42.216962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.216991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.217210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.217240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.217582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.217612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.217960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.217989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.218249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.218279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.218647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.218681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.218993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.219023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.219274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.219308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.219548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.219577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.219916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.219944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 
00:32:14.240 [2024-11-26 07:41:42.220266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.220295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.220519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.220548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.220907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.220940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.221297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.221328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.221641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.221671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.221996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.222026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.222378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.222410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.222747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.222775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.223094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.223124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.223490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.223521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 
00:32:14.240 [2024-11-26 07:41:42.223917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.223946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.224302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.224333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.224658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.224687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.224901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.224931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.225280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.225317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.225665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.225697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.226072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.226100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.226470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.226504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.226880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.226910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.227297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.227328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 
00:32:14.240 [2024-11-26 07:41:42.227540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.227569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.227822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.227852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.228117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.228150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.228397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.228428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.228746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.228775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.229042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.229072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.229439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.229471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.229711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.229740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.230100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.230130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.230387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.230420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 
00:32:14.240 [2024-11-26 07:41:42.230791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.230820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.231171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.231202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.240 qpair failed and we were unable to recover it. 00:32:14.240 [2024-11-26 07:41:42.231453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.240 [2024-11-26 07:41:42.231484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.231834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.231864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.232227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.232258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.232687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.232717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.232919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.232949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.233310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.233342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.233702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.233731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.234107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.234137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 
00:32:14.241 [2024-11-26 07:41:42.234360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.234391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.234755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.234784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.235117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.235146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.235511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.235540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.235903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.235931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.236185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.236218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.236574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.236604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.236832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.236862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.237119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.237148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.237533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.237563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 
00:32:14.241 [2024-11-26 07:41:42.237896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.237924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.238281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.238311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.238658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.238688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.239052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.239082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.239209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.239238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.239656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.239691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.240037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.240067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.240418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.240448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.240800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.240830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.241186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.241216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 
00:32:14.241 [2024-11-26 07:41:42.241576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.241605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.241943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.241973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.242315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.242346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.242582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.242611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.242963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.242991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.243335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.243365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.243728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.243756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.244126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.244155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.244513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.244542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.244893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.244922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 
00:32:14.241 [2024-11-26 07:41:42.245131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.245172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.245551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.245581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.245916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.245944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.246287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.246318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.246524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.246552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.246903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.246932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.247236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.247265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.247701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.247731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.248049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.248078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.248277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.248306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 
00:32:14.241 [2024-11-26 07:41:42.248657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.248685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.249059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.249087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.249418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.249448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.249819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.249850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.250173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.250205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.241 [2024-11-26 07:41:42.250557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.241 [2024-11-26 07:41:42.250585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.241 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.250810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.250838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.251146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.251187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.251526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.251556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.251892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.251920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 
00:32:14.242 [2024-11-26 07:41:42.252274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.252304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.252529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.252557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.252769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.252797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.252996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.253026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.253458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.253487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.253838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.253867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.254225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.254256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.254354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.254382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.254695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.254725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.254953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.254982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 
00:32:14.242 [2024-11-26 07:41:42.255344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.255374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.255711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.255741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.255964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.255993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.256251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.256283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.256645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.256673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.256898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.256926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.257304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.257334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.257709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.257737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.258099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.258127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.258519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.258550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 
00:32:14.242 [2024-11-26 07:41:42.258786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.258815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.259175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.259206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.259529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.259559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.259754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.259782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.260142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.260183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.260403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.260431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.260782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.260810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.261129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.261157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.261370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.261401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.261735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.261764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 
00:32:14.242 [2024-11-26 07:41:42.262117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.262147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.262510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.262539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.262760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.262788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.263145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.263192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.263427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.263455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.263757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.263787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.264141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.264185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.264501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.264530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.264892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.264923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.265324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.265355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 
00:32:14.242 [2024-11-26 07:41:42.265708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.265737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.265856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.265884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.266314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.242 [2024-11-26 07:41:42.266345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.242 qpair failed and we were unable to recover it. 00:32:14.242 [2024-11-26 07:41:42.266437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.266464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.266665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.266693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.267071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.267099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.267452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.267482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.267836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.267865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.268217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.268248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.268445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.268473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 
00:32:14.243 [2024-11-26 07:41:42.268817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.268846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.269210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.269241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.269514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.269541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.269905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.269934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.270306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.270336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.270557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.270586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.270807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.270836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.271134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.271181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.271523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.271552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.271769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.271797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 
00:32:14.243 [2024-11-26 07:41:42.272152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.272194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.272547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.272578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.272790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.272822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.273019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.273049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.273394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.273426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.273761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.273791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.274039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.274067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.274425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.274456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.274821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.274851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.275189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.275219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 
00:32:14.243 [2024-11-26 07:41:42.275546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.275574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.275923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.275951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.276307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.276339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.276435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.276463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.276786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.276822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.277059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.277088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.277436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.277468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.277830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.277858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.278102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.278133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.278361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.278390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 
00:32:14.243 [2024-11-26 07:41:42.278768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.278796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.279148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.279188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.279554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.279583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.279886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.279915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.280126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.280154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.280523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.280556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.280919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.280947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.281311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.281342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.281710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.281741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.282086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.282115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 
00:32:14.243 [2024-11-26 07:41:42.282479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.282509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.282835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.282864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.283115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.283147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.283303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.283336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.283573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.283602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.283819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.283849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.243 qpair failed and we were unable to recover it. 00:32:14.243 [2024-11-26 07:41:42.284178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.243 [2024-11-26 07:41:42.284209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.284573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.284601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.284942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.284971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.285175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.285205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 
00:32:14.244 [2024-11-26 07:41:42.285552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.285579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.285782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.285818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.286178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.286209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.286460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.286488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.286838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.286868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.287231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.287261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.287577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.287614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.287822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.287851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.288190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.288221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.288456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.288489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 
00:32:14.244 [2024-11-26 07:41:42.288855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.288884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.289200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.289229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.289603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.289633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.289973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.290002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.290332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.290361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.290565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.290595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.290958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.290986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.291325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.291355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.291696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.291726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.292026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.292054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 
00:32:14.244 [2024-11-26 07:41:42.292388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.292418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.292774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.292805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.293187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.293217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.293550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.293578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.293942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.293971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.294295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.294324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.294691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.294721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.295024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.295054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.295260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.295290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.295670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.295700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 
00:32:14.244 [2024-11-26 07:41:42.296057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.296087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.296434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.296463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.296793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.296822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.297185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.297217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.297593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.297624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.297977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.298005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 00:32:14.244 [2024-11-26 07:41:42.298095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.298123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 
00:32:14.244 [2024-11-26 07:41:42.298205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5e00 (9): Bad file descriptor 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Write completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Write completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Write completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Write completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Write completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Write completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Write completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Write completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Write completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Write completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 Read completed with error (sct=0, sc=8) 00:32:14.244 starting I/O failed 00:32:14.244 [2024-11-26 07:41:42.299328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:32:14.244 [2024-11-26 07:41:42.299796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.244 [2024-11-26 07:41:42.299853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.244 qpair failed and we were unable to recover it. 
00:32:14.244 [2024-11-26 07:41:42.300083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.300115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 00:32:14.245 [2024-11-26 07:41:42.300361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.300393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 00:32:14.245 [2024-11-26 07:41:42.300697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.300727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 00:32:14.245 [2024-11-26 07:41:42.301099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.301127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 00:32:14.245 [2024-11-26 07:41:42.301483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.301515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 00:32:14.245 [2024-11-26 07:41:42.301754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.301784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 00:32:14.245 [2024-11-26 07:41:42.301998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.302026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 00:32:14.245 [2024-11-26 07:41:42.302278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.302315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 00:32:14.245 [2024-11-26 07:41:42.302680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.302711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 00:32:14.245 [2024-11-26 07:41:42.303019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.303048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 
00:32:14.245 [2024-11-26 07:41:42.303437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.245 [2024-11-26 07:41:42.303468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.245 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.303737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.303769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.304200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.304230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.304555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.304585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.304937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.304968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.305337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.305367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.305727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.305757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.306120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.306149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.306490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.306520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.306868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.306898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 
00:32:14.520 [2024-11-26 07:41:42.307096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.307125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.307487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.307518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.307829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.307859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.308098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.308133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.308481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.308513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.308862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.308891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.309235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.309267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.309594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.309622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.309878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.309911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.310250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.310281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 
00:32:14.520 [2024-11-26 07:41:42.310651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.310680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.311041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.311069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.520 qpair failed and we were unable to recover it. 00:32:14.520 [2024-11-26 07:41:42.311418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.520 [2024-11-26 07:41:42.311449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.311687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.311717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.312053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.312082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.312403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.312433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.312642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.312670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.313031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.313060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.313406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.313438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.313773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.313802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 
00:32:14.521 [2024-11-26 07:41:42.314123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.314152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.314517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.314546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.314906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.314935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.315192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.315223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.315458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.315488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.315830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.315860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.316197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.316227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.316589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.316618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.316983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.317012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.317366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.317398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 
00:32:14.521 [2024-11-26 07:41:42.317605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.317634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.317861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.317889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.318029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.318062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.318431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.318462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.318657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.318685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.318917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.318946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.319307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.319337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.319672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.319703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.320062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.320091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 00:32:14.521 [2024-11-26 07:41:42.320453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.521 [2024-11-26 07:41:42.320484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.521 qpair failed and we were unable to recover it. 
00:32:14.526 [2024-11-26 07:41:42.385281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.526 [2024-11-26 07:41:42.385316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.526 qpair failed and we were unable to recover it. 00:32:14.526 [2024-11-26 07:41:42.385534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.526 [2024-11-26 07:41:42.385564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.526 qpair failed and we were unable to recover it. 00:32:14.526 [2024-11-26 07:41:42.385868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.526 [2024-11-26 07:41:42.385896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.526 qpair failed and we were unable to recover it. 00:32:14.526 [2024-11-26 07:41:42.386250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.526 [2024-11-26 07:41:42.386281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.526 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.386613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.386643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.386874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.386902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.387126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.387155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.387357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.387387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.387741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.387768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.388119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.388149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 
00:32:14.527 [2024-11-26 07:41:42.388518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.388548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.388884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.388913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.389123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.389152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.389517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.389547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.389762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.389791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.390128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.390157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.390378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.390408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.390761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.390790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.391130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.391170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.391500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.391529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 
00:32:14.527 [2024-11-26 07:41:42.391866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.391895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.392248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.392280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.392635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.392664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.393008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.393035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.393397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.393427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.393752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.393781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.394126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.394155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.394392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.394422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.394763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.394793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.395155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.395196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 
00:32:14.527 [2024-11-26 07:41:42.395539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.395568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.395910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.395938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.396134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.396185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.396547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.396576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.396927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.396956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.397297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.397328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.397682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.397711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.398072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.398100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.398462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.398492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.398846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.398875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 
00:32:14.527 [2024-11-26 07:41:42.399234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.399271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.527 [2024-11-26 07:41:42.399518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.527 [2024-11-26 07:41:42.399548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.527 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.399642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.399670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.400107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.400223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.400599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.400636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.400969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.400999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.401328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.401361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.401587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.401616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.401969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.401997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.402299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.402329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 
00:32:14.528 [2024-11-26 07:41:42.402708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.402736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.403094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.403123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.403480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.403510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.403851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.403879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.404250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.404281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.404629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.404657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.404952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.404981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.405332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.405363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.405590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.405625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.405837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.405865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 
00:32:14.528 [2024-11-26 07:41:42.406117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.406146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.406497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.406527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.406745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.406778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.407098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.407127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.407357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.407389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.407743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.407772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.408006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.408035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.408447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.408486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.408719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.408747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.409125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.409154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 
00:32:14.528 [2024-11-26 07:41:42.409522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.409551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.409890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.409919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.410289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.410320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.410678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.410707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.411049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.411077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.411403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.411432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.411761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.411790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.412131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.412170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.412478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.412507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 00:32:14.528 [2024-11-26 07:41:42.412839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.412868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.528 qpair failed and we were unable to recover it. 
00:32:14.528 [2024-11-26 07:41:42.413212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.528 [2024-11-26 07:41:42.413242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.413508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.413537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.413901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.413929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.414213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.414242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.414471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.414501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.414717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.414745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.414940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.414968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.415289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.415320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.415644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.415673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.416014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.416042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 
00:32:14.529 [2024-11-26 07:41:42.416251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.416281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.416512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.416541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.416889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.416918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.417280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.417311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.417642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.417672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.417903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.417933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.418284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.418316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.418634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.418664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.418991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.419020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.419386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.419416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 
00:32:14.529 [2024-11-26 07:41:42.419627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.419655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.419997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.420027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.420376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.420405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.420727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.420757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.421068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.421102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.421432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.421463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.421656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.421684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.422045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.422076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.422394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.422431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.422644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.422674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 
00:32:14.529 [2024-11-26 07:41:42.422910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.422944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.423256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.423287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.423501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.423531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.423935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.423965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.424312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.424342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.529 qpair failed and we were unable to recover it. 00:32:14.529 [2024-11-26 07:41:42.424694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.529 [2024-11-26 07:41:42.424723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.425066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.425095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.425461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.425491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.425861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.425891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.426228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.426260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 
00:32:14.530 [2024-11-26 07:41:42.426497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.426526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.426891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.426918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.427275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.427306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.427643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.427673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.427866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.427896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.428124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.428152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.428518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.428548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.428883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.428911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.429108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.429137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.429378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.429408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 
00:32:14.530 [2024-11-26 07:41:42.429725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.429756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.430053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.430089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.430337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.430369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.430708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.430738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.430976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.431004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.431387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.431422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.431764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.431792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.432116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.432146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.432518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.432548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 00:32:14.530 [2024-11-26 07:41:42.432903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.530 [2024-11-26 07:41:42.432932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.530 qpair failed and we were unable to recover it. 
00:32:14.530 [2024-11-26 07:41:42.433246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.530 [2024-11-26 07:41:42.433277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.530 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 07:41:42.433 and 07:41:42.504; the duplicate entries are elided ...]
00:32:14.536 [2024-11-26 07:41:42.504646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.536 [2024-11-26 07:41:42.504677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.536 qpair failed and we were unable to recover it.
00:32:14.536 [2024-11-26 07:41:42.505017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.505046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.505169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.505198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.505566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.505595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.505938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.505974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.506356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.506386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.506713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.506741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.506987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.507019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.507323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.507352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.507555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.507583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.507800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.507830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 
00:32:14.536 [2024-11-26 07:41:42.508180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.508211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.508405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.508433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.508770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.508798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.509014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.509042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.509343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.509373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.509722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.509751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.510102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.510131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.510442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.510472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.510676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.510703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.510921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.510949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 
00:32:14.536 [2024-11-26 07:41:42.511270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.511299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.511675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.511704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.512061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.512090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.512440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.512470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.512826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.512855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.513210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.513256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.513478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.513506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.513807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.513836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.514197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.514229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.514567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.514596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 
00:32:14.536 [2024-11-26 07:41:42.514932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.514967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.515310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.536 [2024-11-26 07:41:42.515339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.536 qpair failed and we were unable to recover it. 00:32:14.536 [2024-11-26 07:41:42.515680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.515708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.515939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.515967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.516309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.516339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.516694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.516723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.516975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.517003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.517349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.517381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.517700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.517729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.518034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.518062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 
00:32:14.537 [2024-11-26 07:41:42.518395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.518425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.518765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.518793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.519000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.519029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.519397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.519427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.519759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.519788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.520130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.520166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.520385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.520412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.520725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.520753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.521064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.521093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.521458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.521487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 
00:32:14.537 [2024-11-26 07:41:42.521813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.521842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.522135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.522172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.522370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.522399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.522736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.522765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.522955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.522984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.523323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.523352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.523710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.523738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.524047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.524076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.524443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.524473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.524792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.524821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 
00:32:14.537 [2024-11-26 07:41:42.525177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.525207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.525453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.525481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.525834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.525863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.526218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.526248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.526602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.526631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.526857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.526885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.527212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.527240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.527580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.527609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.527847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.527875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.537 [2024-11-26 07:41:42.528214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.528243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 
00:32:14.537 [2024-11-26 07:41:42.528445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.537 [2024-11-26 07:41:42.528473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.537 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.528806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.528840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.529180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.529210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.529436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.529465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.529680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.529710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.530049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.530078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.530301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.530331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.530532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.530561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.530860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.530889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.531247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.531277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 
00:32:14.538 [2024-11-26 07:41:42.531595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.531625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.531945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.531973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.532328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.532358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.532742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.532770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.533123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.533151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.533524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.533553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.533771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.533803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.534148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.534189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.534553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.534582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.534939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.534967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 
00:32:14.538 [2024-11-26 07:41:42.535314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.535344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.535541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.535569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.535845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.535873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.536085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.536112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.536541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.536572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.536919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.536947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.537285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.537313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.537609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.537638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.537992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.538027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.538334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.538364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 
00:32:14.538 [2024-11-26 07:41:42.538707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.538736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.538927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.538956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.539192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.539221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.539424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.539453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.539795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.538 [2024-11-26 07:41:42.539824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.538 qpair failed and we were unable to recover it. 00:32:14.538 [2024-11-26 07:41:42.540013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.540042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.540391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.540420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.540508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.540535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.540758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.540786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.541128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.541156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 
00:32:14.539 [2024-11-26 07:41:42.541507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.541535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.541728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.541757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.541987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.542017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.542365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.542395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.542723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.542751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.543103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.543132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.543351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.543380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.543719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.543747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.544091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.544119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.544453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.544483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 
00:32:14.539 [2024-11-26 07:41:42.544823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.544852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.545211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.545241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.545590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.545618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.545852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.545881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.546235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.546264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.546611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.546640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.546993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.547022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.547375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.547404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.547758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.547788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.547983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.548013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 
00:32:14.539 [2024-11-26 07:41:42.548222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.548254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.548660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.548689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.548882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.548911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.549263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.549293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.549588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.549618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.549838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.549868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.550186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.550216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.550541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.550570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.550902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.550930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 00:32:14.539 [2024-11-26 07:41:42.551185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.539 [2024-11-26 07:41:42.551222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.539 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats without interruption from 07:41:42.551 through 07:41:42.616 ...]
00:32:14.821 [2024-11-26 07:41:42.616848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.821 [2024-11-26 07:41:42.616876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.821 qpair failed and we were unable to recover it.
00:32:14.821 [2024-11-26 07:41:42.617233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.821 [2024-11-26 07:41:42.617263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.821 qpair failed and we were unable to recover it. 00:32:14.821 [2024-11-26 07:41:42.617588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.821 [2024-11-26 07:41:42.617618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.821 qpair failed and we were unable to recover it. 00:32:14.821 [2024-11-26 07:41:42.617967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.821 [2024-11-26 07:41:42.617995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.821 qpair failed and we were unable to recover it. 00:32:14.821 [2024-11-26 07:41:42.618310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.821 [2024-11-26 07:41:42.618346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.821 qpair failed and we were unable to recover it. 00:32:14.821 [2024-11-26 07:41:42.618699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.821 [2024-11-26 07:41:42.618729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.821 qpair failed and we were unable to recover it. 00:32:14.821 [2024-11-26 07:41:42.619056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.821 [2024-11-26 07:41:42.619085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.821 qpair failed and we were unable to recover it. 00:32:14.821 [2024-11-26 07:41:42.619419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.821 [2024-11-26 07:41:42.619449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.821 qpair failed and we were unable to recover it. 00:32:14.821 [2024-11-26 07:41:42.619795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.821 [2024-11-26 07:41:42.619824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.821 qpair failed and we were unable to recover it. 00:32:14.821 [2024-11-26 07:41:42.620179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.821 [2024-11-26 07:41:42.620209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.821 qpair failed and we were unable to recover it. 00:32:14.821 [2024-11-26 07:41:42.620426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.620454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 
00:32:14.822 [2024-11-26 07:41:42.620777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.620806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.621156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.621195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.621570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.621598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.621922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.621951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.622190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.622221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.622561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.622591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.622944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.622971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.623322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.623352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.623679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.623707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.624076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.624105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 
00:32:14.822 [2024-11-26 07:41:42.624332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.624361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.624568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.624596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.624937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.624965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.625310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.625340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.625675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.625703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.626055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.626083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.626441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.626471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.626712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.626743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.627076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.627105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.627448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.627478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 
00:32:14.822 [2024-11-26 07:41:42.627837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.627864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.628213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.628243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.628508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.628536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.628878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.628906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.629246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.629275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.629470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.629500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.629874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.629902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.630318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.630349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.630706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.630736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.631042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.631071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 
00:32:14.822 [2024-11-26 07:41:42.631435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.631466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.631812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.631841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.632190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.632219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.632568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.632597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.632942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.632976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.633262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.633292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.633631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.633659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.822 [2024-11-26 07:41:42.634021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.822 [2024-11-26 07:41:42.634050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.822 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.634345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.634375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.634697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.634726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 
00:32:14.823 [2024-11-26 07:41:42.635082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.635112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.635447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.635478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.635827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.635856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.636201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.636230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.636469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.636498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.636827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.636855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.637207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.637237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.637595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.637624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.637842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.637871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.638226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.638256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 
00:32:14.823 [2024-11-26 07:41:42.638464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.638492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.638792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.638821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.639173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.639203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.639550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.639578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.639827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.639858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.640051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.640080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.640450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.640481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.640822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.640851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.641197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.641226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.641432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.641461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 
00:32:14.823 [2024-11-26 07:41:42.641799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.641828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.642189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.642227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.642540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.642569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.642785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.642813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.642941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.642968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.643327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.643357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.643704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.643732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.643944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.643973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.644359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.644388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.644588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.644616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 
00:32:14.823 [2024-11-26 07:41:42.644873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.644901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.645219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.645248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.645631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.645659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.645861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.645888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.646231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.823 [2024-11-26 07:41:42.646260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.823 qpair failed and we were unable to recover it. 00:32:14.823 [2024-11-26 07:41:42.646601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.646630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.646845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.646874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.647214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.647243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.647591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.647620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.647971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.648001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 
00:32:14.824 [2024-11-26 07:41:42.648320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.648349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.648704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.648732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.649037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.649066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.649414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.649443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.649784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.649812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.650176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.650207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.650504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.650533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.650890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.650919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.651254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.651282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.651631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.651660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 
00:32:14.824 [2024-11-26 07:41:42.652014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.652042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.652387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.652416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.652768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.652797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.653014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.653042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.653394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.653423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.653614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.653642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.654005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.654034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.654383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.654412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.654761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.654789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.655120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.655148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 
00:32:14.824 [2024-11-26 07:41:42.655355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.655384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.655702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.655731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.656096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.656129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.656343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.656373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.656676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.656704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.657043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.657072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.657424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.657455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.824 qpair failed and we were unable to recover it. 00:32:14.824 [2024-11-26 07:41:42.657652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.824 [2024-11-26 07:41:42.657681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.657925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.657958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.658212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.658245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 
00:32:14.825 [2024-11-26 07:41:42.658601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.658630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.658991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.659020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.659323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.659352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.659696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.659725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.660081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.660110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.660427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.660456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.660660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.660689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.661029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.661056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.661253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.661283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.661632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.661662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 
00:32:14.825 [2024-11-26 07:41:42.662034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.662062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.662391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.662420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.662759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.662787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.663141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.663176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.663538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.663567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.663875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.663902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.664243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.664272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.664591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.664619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.664811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.664838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.665224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.665260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 
00:32:14.825 [2024-11-26 07:41:42.665606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.665636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.665886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.665919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.666247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.666278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.666636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.666665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.667008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.667037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.667329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.667358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.667722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.667751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.667952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.667980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.668324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.668353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.668654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.668682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 
00:32:14.825 [2024-11-26 07:41:42.668874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.668903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.669131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.669167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.669376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.669404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.669600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.669630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.669974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.670002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.670374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.825 [2024-11-26 07:41:42.670405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.825 qpair failed and we were unable to recover it. 00:32:14.825 [2024-11-26 07:41:42.670628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.670657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.671015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.671043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.671408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.671437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.671810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.671839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 
00:32:14.826 [2024-11-26 07:41:42.672205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.672234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.672557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.672586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.672926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.672956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.673286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.673323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.673570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.673602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.673838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.673867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.674211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.674242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.674590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.674621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.674872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.674901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.675237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.675267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 
00:32:14.826 [2024-11-26 07:41:42.675486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.675514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.675879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.675908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.676262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.676291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.676522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.676550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.676743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.676771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.676998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.677027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.677361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.677391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.677719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.677747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.678104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.678132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.678479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.678510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 
00:32:14.826 [2024-11-26 07:41:42.678813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.678847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.679068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.679099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.679426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.679456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.679757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.679785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.680130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.680167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.680547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.680576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.680787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.680819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.681181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.681212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.681556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.681593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.681924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.681954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 
00:32:14.826 [2024-11-26 07:41:42.682310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.682342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.682681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.682710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.682944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.682974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.683300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.683330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.826 [2024-11-26 07:41:42.683679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.826 [2024-11-26 07:41:42.683710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.826 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.684044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.684073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.684278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.684309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.684615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.684643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.684942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.684970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.685312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.685344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 
00:32:14.827 [2024-11-26 07:41:42.685708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.685739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.686089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.686118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.686481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.686512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.686730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.686758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.687116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.687144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.687503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.687533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.687919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.687948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.688317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.688348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.688712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.688744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.689102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.689131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 
00:32:14.827 [2024-11-26 07:41:42.689512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.689542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.689902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.689931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.690298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.690328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.690547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.690578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.690944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.690974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.691204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.691238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.691546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.691574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.691805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.691836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.692169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.692200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.692498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.692528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 
00:32:14.827 [2024-11-26 07:41:42.692861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.692890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.693253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.693283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.693605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.693636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.693959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.693988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.694272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.694302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.694532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.694561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.694899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.694928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.695268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.695300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.695664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.695694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.695896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.695924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 
00:32:14.827 [2024-11-26 07:41:42.696269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.696300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.696622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.696651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.696872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.827 [2024-11-26 07:41:42.696901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.827 qpair failed and we were unable to recover it. 00:32:14.827 [2024-11-26 07:41:42.697281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.697311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.697558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.697589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.697933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.697964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.698216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.698246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.698606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.698635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.698967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.698997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.699361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.699391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 
00:32:14.828 [2024-11-26 07:41:42.699731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.699760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.700084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.700114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.700500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.700531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.700721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.700749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.701021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.701050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.701392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.701422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.701720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.701749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.701966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.701995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.702220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.702255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.702476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.702504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 
00:32:14.828 [2024-11-26 07:41:42.702862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.702891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.703209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.703240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.703485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.703514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.703853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.703883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.704127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.704156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.704511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.704540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.704887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.704916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.705261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.705291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.705640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.705669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.706011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.706040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 
00:32:14.828 [2024-11-26 07:41:42.706400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.706430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.706681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.706710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.707039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.707068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.707275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.707306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.707657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.707685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.708017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.708046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.708369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.708399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.708749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.708777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.708994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.709023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.709377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.709406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 
00:32:14.828 [2024-11-26 07:41:42.709633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.709662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.709884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.828 [2024-11-26 07:41:42.709913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.828 qpair failed and we were unable to recover it. 00:32:14.828 [2024-11-26 07:41:42.710273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.710303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.710622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.710650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.711014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.711043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.711252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.711283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.711629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.711660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.711984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.712013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.712383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.712413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.712724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.712753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 
00:32:14.829 [2024-11-26 07:41:42.713096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.713125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.713489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.713520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.713875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.713905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.714125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.714154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.714407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.714436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.714708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.714737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.715006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.715036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.715257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.715288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.715529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.715557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.715888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.715918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 
00:32:14.829 [2024-11-26 07:41:42.716274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.716304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.716641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.716671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.716916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.716944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.717287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.717318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.717669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.717698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.718028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.718057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.718272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.718302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.718495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.718524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.718747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.718776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.719144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.719181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 
00:32:14.829 [2024-11-26 07:41:42.719514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.719547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.719864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.719892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.829 [2024-11-26 07:41:42.720244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.829 [2024-11-26 07:41:42.720273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.829 qpair failed and we were unable to recover it. 00:32:14.830 [2024-11-26 07:41:42.720627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.830 [2024-11-26 07:41:42.720657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.830 qpair failed and we were unable to recover it. 00:32:14.830 [2024-11-26 07:41:42.720978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.830 [2024-11-26 07:41:42.721006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.830 qpair failed and we were unable to recover it. 00:32:14.830 [2024-11-26 07:41:42.721370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.830 [2024-11-26 07:41:42.721400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.830 qpair failed and we were unable to recover it. 00:32:14.830 [2024-11-26 07:41:42.721752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.830 [2024-11-26 07:41:42.721780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.830 qpair failed and we were unable to recover it. 00:32:14.830 [2024-11-26 07:41:42.722125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.830 [2024-11-26 07:41:42.722153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.830 qpair failed and we were unable to recover it. 00:32:14.830 [2024-11-26 07:41:42.722499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.830 [2024-11-26 07:41:42.722529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.830 qpair failed and we were unable to recover it. 00:32:14.830 [2024-11-26 07:41:42.722753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.830 [2024-11-26 07:41:42.722782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420 00:32:14.830 qpair failed and we were unable to recover it. 
00:32:14.831 [2024-11-26 07:41:42.746853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.832 [2024-11-26 07:41:42.746881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.832 qpair failed and we were unable to recover it.
00:32:14.832 [2024-11-26 07:41:42.747222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.832 [2024-11-26 07:41:42.747252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.832 qpair failed and we were unable to recover it.
00:32:14.832 [2024-11-26 07:41:42.747626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.832 [2024-11-26 07:41:42.747655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.832 qpair failed and we were unable to recover it.
00:32:14.832 [2024-11-26 07:41:42.747744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.832 [2024-11-26 07:41:42.747772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.832 qpair failed and we were unable to recover it.
00:32:14.832 [2024-11-26 07:41:42.748324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.832 [2024-11-26 07:41:42.748426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420
00:32:14.832 qpair failed and we were unable to recover it.
00:32:14.832 [2024-11-26 07:41:42.748898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.832 [2024-11-26 07:41:42.748935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420
00:32:14.832 qpair failed and we were unable to recover it.
00:32:14.832 [2024-11-26 07:41:42.749134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.832 [2024-11-26 07:41:42.749175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420
00:32:14.832 qpair failed and we were unable to recover it.
00:32:14.832 [2024-11-26 07:41:42.749610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.832 [2024-11-26 07:41:42.749699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420
00:32:14.832 qpair failed and we were unable to recover it.
00:32:14.832 [2024-11-26 07:41:42.750130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.832 [2024-11-26 07:41:42.750188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420
00:32:14.832 qpair failed and we were unable to recover it.
00:32:14.832 [2024-11-26 07:41:42.750507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.832 [2024-11-26 07:41:42.750538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420
00:32:14.832 qpair failed and we were unable to recover it.
00:32:14.832 [2024-11-26 07:41:42.750879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.750909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.751179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.751216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.751575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.751605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.751912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.751941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.752410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.752502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.752942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.752978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.753453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.753542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.753966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.754004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.754275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.754308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.754652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.754682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 
00:32:14.832 [2024-11-26 07:41:42.755006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.755037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.755302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.755333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.755686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.755716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.756067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.756096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.756427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.756460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.756802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.756831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.757149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.757192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.757544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.757573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.757886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.757916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.758184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.758219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 
00:32:14.832 [2024-11-26 07:41:42.758579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.758608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.758964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.758994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.759239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.759268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.759639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.759668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.759972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.760001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.760332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.760363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.760673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.760702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.761067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.832 [2024-11-26 07:41:42.761096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.832 qpair failed and we were unable to recover it. 00:32:14.832 [2024-11-26 07:41:42.761431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.761461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.761756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.761785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 
00:32:14.833 [2024-11-26 07:41:42.762100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.762130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.762350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.762380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.762711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.762741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.762932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.762961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.763283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.763320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.763521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.763549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.763865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.763894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.764091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.764120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.764364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.764398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.764783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.764812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 
00:32:14.833 [2024-11-26 07:41:42.765147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.765198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.765539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.765568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.765907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.765936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.766286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.766318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.766659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.766687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.766845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.766873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.767208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.767238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.767572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.767600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.767952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.767981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.768188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.768219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 
00:32:14.833 [2024-11-26 07:41:42.768553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.768581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.768799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.768827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.769175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.769206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.769563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.769591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.769814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.769842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.769958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.769987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.770340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.770370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.770710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.770738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.771079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.771108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.771505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.771535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 
00:32:14.833 [2024-11-26 07:41:42.771881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.771910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.833 qpair failed and we were unable to recover it. 00:32:14.833 [2024-11-26 07:41:42.772270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.833 [2024-11-26 07:41:42.772301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.772638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.772666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.773023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.773052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.773298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.773327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.773684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.773712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.773905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.773934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.774267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.774297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.774630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.774659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.774905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.774934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 
00:32:14.834 [2024-11-26 07:41:42.775151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.775188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.775532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.775561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.775881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.775910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.776270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.776300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.776531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.776565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.776976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.777005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.777339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.777368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.777723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.777752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.778107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.778135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.778378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.778408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 
00:32:14.834 [2024-11-26 07:41:42.778765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.778794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.779146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.779184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.779445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.779477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.779780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.779809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.780056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.780084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.780455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.780487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.780708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.780736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.781002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.781031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.781372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.781404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 00:32:14.834 [2024-11-26 07:41:42.781634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:14.834 [2024-11-26 07:41:42.781663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420 00:32:14.834 qpair failed and we were unable to recover it. 
00:32:14.835 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:14.835 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:32:14.835 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:14.835 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:14.835 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:14.836 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:14.836 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:14.836 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.836 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:14.837 [2024-11-26 07:41:42.840739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.840767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f45a8000b90 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.841192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.841285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.841732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.841769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.842176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.842208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.842682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.842772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.843129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.843180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.843634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.843724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.844005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.844044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.844244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.844278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.844517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.844545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.844911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.844940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.845263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.845293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.845671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.845699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.846032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.846062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.846370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.846400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.846697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.846726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.847096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.847124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.847465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.847495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.847725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.847754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.848109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.848137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.848504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.848541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.848914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.848943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.849288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.849318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.849662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.849691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.850031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.850059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.850409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.850438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.850669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.850697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.851038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.851067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.851454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.851483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.851781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.851809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.852185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.852215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.852534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.852563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.852908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.852936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.853257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.853285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.853620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.853650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.853995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.854025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.854246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.854280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.854514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.854542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.854882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.854911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.855242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.855273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.855574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.855602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.855939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.855968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.856301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.856330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.856645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.856673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.857013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.857041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.857394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.857424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.857761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.857796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.858140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.858184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.858492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.858521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.858754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.858782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.859127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.859155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.859525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.859554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.859901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.859929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.860273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.860303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.860653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.860681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.860805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.860839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.861193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.861224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.861593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.861622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.861995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.862022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.862372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.862402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.862744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.862773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.863114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.863142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.863489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.863518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.863873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.863901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.864251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.864279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.864610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.864638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.864989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.865018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.865241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.865271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.865633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.865662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.865988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.866017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.866225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.866259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.866483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.866515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.866835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.866866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.867205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.867236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 Malloc0
00:32:14.837 [2024-11-26 07:41:42.867626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.867656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 [2024-11-26 07:41:42.867985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.868014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:14.837 [2024-11-26 07:41:42.868238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.868274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.837 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:32:14.837 [2024-11-26 07:41:42.868639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.837 [2024-11-26 07:41:42.868668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.837 qpair failed and we were unable to recover it.
00:32:14.838 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.838 [2024-11-26 07:41:42.868911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.868940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:14.838 [2024-11-26 07:41:42.869255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.869285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
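The xtrace records interleaved above show the target being rebuilt while the host keeps retrying: line 21 of host/target_disconnect.sh runs rpc_cmd nvmf_create_transport -t tcp -o. In SPDK's autotest harness, rpc_cmd forwards its arguments to scripts/rpc.py against the target's RPC socket; a hedged stand-alone equivalent (the default socket path is an assumption, not shown in this log) would be:

    # Create the TCP transport on a running nvmf_tgt; -s names the RPC
    # socket (rpc.py default shown), the rest mirrors the traced call.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o

The *** TCP Transport Init *** notice a few records further down is the target-side confirmation that this call took effect.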
00:32:14.838 [2024-11-26 07:41:42.869589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.869618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.869962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.869991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.870310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.870340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.870534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.870563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.870876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.870906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.871116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.871145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.871489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.871524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.871859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.871889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.872133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.872169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.872361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.872389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.872744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.872774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.873096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.873127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.873490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.873520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.873849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.873877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.874223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.874253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.874655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.874685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.874942] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:14.838 [2024-11-26 07:41:42.875029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.875058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.875467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.875497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.875717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.875745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.876076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.876104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.876447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.876477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.876823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.876851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.877219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.877250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.877573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.877601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.877890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.877918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.878249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.878279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.878510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.878542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.878918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.878947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.879287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.879317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.879663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.879690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.880027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.880055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.880399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.880429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.880776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.880804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.881111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.881146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.881510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.881539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.881753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.881782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.881904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.881931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.882127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.882155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.882495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.882524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.882841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.882871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.883103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.883136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.883544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.883574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.883867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.883896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:14.838 [2024-11-26 07:41:42.884205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.884234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:14.838 [2024-11-26 07:41:42.884585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.884613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:14.838 [2024-11-26 07:41:42.884948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:14.838 [2024-11-26 07:41:42.884984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.885213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.885244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.885579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.885607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
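Line 22 of the same script creates the subsystem the host has been trying to reach: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, where -a allows any host NQN to connect and -s sets the serial number. Expressed directly against rpc.py (same assumption about the default RPC socket as above):

    # Create the target subsystem traced in the log; -a = allow any host
    # to connect, -s = serial number the subsystem will report.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001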
00:32:14.838 [2024-11-26 07:41:42.885980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.886009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.886230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.886259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.886577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.886606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.886845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.886873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.887180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.887209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.887570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.887598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.887809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.887838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.888191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.888220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.888565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.888593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.888788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.888816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.889183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.889230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.889564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.889593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.889932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.889961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.890292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.890322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.890534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.890562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.890785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.890816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.891180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.891210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.891526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.891554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.891747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.891781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.892088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.892116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.892340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.892369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.892707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.892735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.892948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.892976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.893327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.893360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.893685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.893727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.893936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.893965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.894311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.894340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.894541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.894570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:14.838 [2024-11-26 07:41:42.894799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:14.838 [2024-11-26 07:41:42.894827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:14.838 qpair failed and we were unable to recover it.
00:32:15.101 [2024-11-26 07:41:42.895175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.101 [2024-11-26 07:41:42.895206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:15.101 qpair failed and we were unable to recover it.
00:32:15.101 [2024-11-26 07:41:42.895456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.101 [2024-11-26 07:41:42.895484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:15.101 qpair failed and we were unable to recover it.
00:32:15.101 [2024-11-26 07:41:42.895853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.101 [2024-11-26 07:41:42.895881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:15.101 qpair failed and we were unable to recover it.
00:32:15.101 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.101 [2024-11-26 07:41:42.896232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.102 [2024-11-26 07:41:42.896261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:15.102 qpair failed and we were unable to recover it.
00:32:15.102 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:15.102 [2024-11-26 07:41:42.896506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.102 [2024-11-26 07:41:42.896534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:15.102 qpair failed and we were unable to recover it.
00:32:15.102 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.102 [2024-11-26 07:41:42.896891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.102 [2024-11-26 07:41:42.896920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:15.102 qpair failed and we were unable to recover it.
00:32:15.102 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:15.102 [2024-11-26 07:41:42.897273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.102 [2024-11-26 07:41:42.897303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:15.102 qpair failed and we were unable to recover it.
00:32:15.102 [2024-11-26 07:41:42.897691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.102 [2024-11-26 07:41:42.897720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:15.102 qpair failed and we were unable to recover it.
00:32:15.102 [2024-11-26 07:41:42.898075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.102 [2024-11-26 07:41:42.898104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:15.102 qpair failed and we were unable to recover it.
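Line 24 attaches the bdev created earlier as a namespace: rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0. For the retrying initiator to get past ECONNREFUSED, the target must also expose a listener on 10.0.0.2:4420; the listener call below is the usual next step in these scripts, not something shown in this excerpt, so treat it as an assumed sketch:

    # Expose Malloc0 as a namespace of cnode1, mirroring the traced call.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Assumed follow-up: listen on the address/port the host has been
    # dialing; once this is up, connect() stops returning errno 111.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420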
00:32:15.102 [... connect() failed (errno = 111) retry triplets continue from 07:41:42.898 through 07:41:42.908 ...]
00:32:15.102 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.103 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:15.103 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.103 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:15.103 [... connect() failed (errno = 111) retry triplets continue from 07:41:42.908 through 07:41:42.914 ...]
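The rpc_cmd invocations traced above are the autotest wrapper around SPDK's scripts/rpc.py. For reference, the equivalent standalone calls for these two steps would be roughly the following, assuming the target application is running and serving the default /var/tmp/spdk.sock RPC socket:

    # Attach the Malloc0 bdev as a namespace of cnode1, then add the TCP
    # listener, mirroring host/target_disconnect.sh lines 24-25 above.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420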
00:32:15.103 [2024-11-26 07:41:42.914932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:15.103 [2024-11-26 07:41:42.914960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab00c0 with addr=10.0.0.2, port=4420
00:32:15.103 qpair failed and we were unable to recover it.
00:32:15.103 [2024-11-26 07:41:42.915175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:15.103 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.103 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:15.103 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.103 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:15.103 [2024-11-26 07:41:42.925851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.103 [2024-11-26 07:41:42.926014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.103 [2024-11-26 07:41:42.926061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.103 [2024-11-26 07:41:42.926083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.103 [2024-11-26 07:41:42.926102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.103 [2024-11-26 07:41:42.926152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.103 qpair failed and we were unable to recover it.
00:32:15.103 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.103 07:41:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1649785
00:32:15.103 [2024-11-26 07:41:42.935789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.103 [2024-11-26 07:41:42.935871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.103 [2024-11-26 07:41:42.935900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.103 [2024-11-26 07:41:42.935916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.103 [2024-11-26 07:41:42.935931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.103 [2024-11-26 07:41:42.935960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.103 qpair failed and we were unable to recover it.
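Each block of this shape is one iteration of the same failed handshake: the target rejects the I/O queue CONNECT because the controller ID the host presents no longer exists ("Unknown controller ID 0x1"), the host sees the CONNECT completion with sct 1, sc 130 (0x82, the Fabrics CONNECT "Invalid Parameters" status), and the qpair is torn down with CQ transport error -6 (ENXIO, "No such device or address") before the next retry. The rejection is tied to the stale controller ID rather than to the listener itself; a fresh association allocates a new controller. Purely as an illustration (kernel initiator, not part of this run), a fresh connect against the same listener would look like:

    # Illustrative only: a new association creates a new controller ID, so this
    # can succeed even while the stale qpair above keeps failing with sc 0x82.
    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list    # the namespace backed by Malloc0 should appear on success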
00:32:15.103 [2024-11-26 07:41:42.945772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.103 [2024-11-26 07:41:42.945833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.103 [2024-11-26 07:41:42.945852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.103 [2024-11-26 07:41:42.945862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.103 [2024-11-26 07:41:42.945871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.103 [2024-11-26 07:41:42.945890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.103 qpair failed and we were unable to recover it. 00:32:15.103 [2024-11-26 07:41:42.955710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.103 [2024-11-26 07:41:42.955763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:42.955776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:42.955783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:42.955790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:42.955808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 00:32:15.104 [2024-11-26 07:41:42.965771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:42.965828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:42.965841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:42.965849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:42.965856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:42.965871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 
00:32:15.104 [2024-11-26 07:41:42.975758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:42.975809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:42.975822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:42.975830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:42.975837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:42.975851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 00:32:15.104 [2024-11-26 07:41:42.985763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:42.985814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:42.985827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:42.985834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:42.985841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:42.985854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 00:32:15.104 [2024-11-26 07:41:42.995778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:42.995827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:42.995840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:42.995847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:42.995853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:42.995867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 
00:32:15.104 [2024-11-26 07:41:43.005841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:43.005901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:43.005927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:43.005935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:43.005942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:43.005961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 00:32:15.104 [2024-11-26 07:41:43.015872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:43.015929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:43.015954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:43.015963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:43.015970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:43.015989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 00:32:15.104 [2024-11-26 07:41:43.025883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:43.025940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:43.025965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:43.025974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:43.025981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:43.026000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 
00:32:15.104 [2024-11-26 07:41:43.035853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:43.035899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:43.035914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:43.035922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:43.035928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:43.035942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 00:32:15.104 [2024-11-26 07:41:43.045915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:43.045972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:43.045986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:43.045997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:43.046004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:43.046017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 00:32:15.104 [2024-11-26 07:41:43.055937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:43.055996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:43.056022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:43.056030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:43.056037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:43.056056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 
00:32:15.104 [2024-11-26 07:41:43.065975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:43.066026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:43.066041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:43.066048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:43.066055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:43.066069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 00:32:15.104 [2024-11-26 07:41:43.075927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.104 [2024-11-26 07:41:43.075971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.104 [2024-11-26 07:41:43.075985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.104 [2024-11-26 07:41:43.075992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.104 [2024-11-26 07:41:43.075999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.104 [2024-11-26 07:41:43.076012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.104 qpair failed and we were unable to recover it. 00:32:15.105 [2024-11-26 07:41:43.086045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.086093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.086107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.086114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.086120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.086138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 
00:32:15.105 [2024-11-26 07:41:43.096038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.096091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.096104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.096111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.096118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.096132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 00:32:15.105 [2024-11-26 07:41:43.106122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.106196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.106210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.106217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.106223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.106237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 00:32:15.105 [2024-11-26 07:41:43.116069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.116117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.116130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.116137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.116143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.116157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 
00:32:15.105 [2024-11-26 07:41:43.126141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.126193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.126206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.126213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.126219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.126234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 00:32:15.105 [2024-11-26 07:41:43.136260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.136317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.136331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.136338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.136344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.136358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 00:32:15.105 [2024-11-26 07:41:43.146251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.146299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.146312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.146319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.146325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.146338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 
00:32:15.105 [2024-11-26 07:41:43.156180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.156227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.156241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.156248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.156254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.156268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 00:32:15.105 [2024-11-26 07:41:43.166310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.166404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.166418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.166425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.166431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.166445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 00:32:15.105 [2024-11-26 07:41:43.176249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.176294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.176307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.176318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.176324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.176338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 
00:32:15.105 [2024-11-26 07:41:43.186298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.105 [2024-11-26 07:41:43.186345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.105 [2024-11-26 07:41:43.186358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.105 [2024-11-26 07:41:43.186365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.105 [2024-11-26 07:41:43.186372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.105 [2024-11-26 07:41:43.186386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.105 qpair failed and we were unable to recover it. 00:32:15.367 [2024-11-26 07:41:43.196291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.367 [2024-11-26 07:41:43.196349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.367 [2024-11-26 07:41:43.196362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.367 [2024-11-26 07:41:43.196369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.367 [2024-11-26 07:41:43.196376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.367 [2024-11-26 07:41:43.196389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.367 qpair failed and we were unable to recover it. 00:32:15.367 [2024-11-26 07:41:43.206367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.367 [2024-11-26 07:41:43.206419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.367 [2024-11-26 07:41:43.206432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.367 [2024-11-26 07:41:43.206439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.367 [2024-11-26 07:41:43.206446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.367 [2024-11-26 07:41:43.206459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.367 qpair failed and we were unable to recover it. 
00:32:15.367 [2024-11-26 07:41:43.216366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.367 [2024-11-26 07:41:43.216419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.367 [2024-11-26 07:41:43.216434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.368 [2024-11-26 07:41:43.216441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.368 [2024-11-26 07:41:43.216448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.368 [2024-11-26 07:41:43.216465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.368 qpair failed and we were unable to recover it. 00:32:15.368 [2024-11-26 07:41:43.226404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.368 [2024-11-26 07:41:43.226454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.368 [2024-11-26 07:41:43.226468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.368 [2024-11-26 07:41:43.226475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.368 [2024-11-26 07:41:43.226482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.368 [2024-11-26 07:41:43.226496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.368 qpair failed and we were unable to recover it. 00:32:15.368 [2024-11-26 07:41:43.236425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.368 [2024-11-26 07:41:43.236507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.368 [2024-11-26 07:41:43.236520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.368 [2024-11-26 07:41:43.236527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.368 [2024-11-26 07:41:43.236534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.368 [2024-11-26 07:41:43.236548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.368 qpair failed and we were unable to recover it. 
00:32:15.368 [2024-11-26 07:41:43.246436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.368 [2024-11-26 07:41:43.246485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.368 [2024-11-26 07:41:43.246498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.368 [2024-11-26 07:41:43.246505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.368 [2024-11-26 07:41:43.246511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.368 [2024-11-26 07:41:43.246524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.368 qpair failed and we were unable to recover it. 00:32:15.368 [2024-11-26 07:41:43.256487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.368 [2024-11-26 07:41:43.256539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.368 [2024-11-26 07:41:43.256552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.368 [2024-11-26 07:41:43.256559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.368 [2024-11-26 07:41:43.256565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.368 [2024-11-26 07:41:43.256579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.368 qpair failed and we were unable to recover it. 00:32:15.368 [2024-11-26 07:41:43.266487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.368 [2024-11-26 07:41:43.266537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.368 [2024-11-26 07:41:43.266550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.368 [2024-11-26 07:41:43.266557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.368 [2024-11-26 07:41:43.266563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.368 [2024-11-26 07:41:43.266576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.368 qpair failed and we were unable to recover it. 
00:32:15.368 [2024-11-26 07:41:43.276500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.368 [2024-11-26 07:41:43.276547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.368 [2024-11-26 07:41:43.276560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.368 [2024-11-26 07:41:43.276567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.368 [2024-11-26 07:41:43.276573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.368 [2024-11-26 07:41:43.276586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.368 qpair failed and we were unable to recover it. 00:32:15.368 [2024-11-26 07:41:43.286586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.368 [2024-11-26 07:41:43.286637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.368 [2024-11-26 07:41:43.286650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.368 [2024-11-26 07:41:43.286657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.368 [2024-11-26 07:41:43.286663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.368 [2024-11-26 07:41:43.286677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.368 qpair failed and we were unable to recover it. 00:32:15.368 [2024-11-26 07:41:43.296594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.368 [2024-11-26 07:41:43.296650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.368 [2024-11-26 07:41:43.296663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.368 [2024-11-26 07:41:43.296670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.368 [2024-11-26 07:41:43.296676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.368 [2024-11-26 07:41:43.296689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.368 qpair failed and we were unable to recover it. 
00:32:15.368 [2024-11-26 07:41:43.306622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.368 [2024-11-26 07:41:43.306696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.368 [2024-11-26 07:41:43.306709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.368 [2024-11-26 07:41:43.306719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.368 [2024-11-26 07:41:43.306726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.368 [2024-11-26 07:41:43.306739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.368 qpair failed and we were unable to recover it.
00:32:15.368 [2024-11-26 07:41:43.316601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.368 [2024-11-26 07:41:43.316649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.368 [2024-11-26 07:41:43.316662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.368 [2024-11-26 07:41:43.316668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.368 [2024-11-26 07:41:43.316675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.368 [2024-11-26 07:41:43.316688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.368 qpair failed and we were unable to recover it.
00:32:15.368 [2024-11-26 07:41:43.326686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.368 [2024-11-26 07:41:43.326739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.368 [2024-11-26 07:41:43.326753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.368 [2024-11-26 07:41:43.326759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.368 [2024-11-26 07:41:43.326765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.368 [2024-11-26 07:41:43.326779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.368 qpair failed and we were unable to recover it.
00:32:15.368 [2024-11-26 07:41:43.336720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.368 [2024-11-26 07:41:43.336804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.368 [2024-11-26 07:41:43.336818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.368 [2024-11-26 07:41:43.336825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.368 [2024-11-26 07:41:43.336832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.368 [2024-11-26 07:41:43.336846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.368 qpair failed and we were unable to recover it.
00:32:15.368 [2024-11-26 07:41:43.346746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.368 [2024-11-26 07:41:43.346800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.368 [2024-11-26 07:41:43.346814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.368 [2024-11-26 07:41:43.346821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.346827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.346844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.356712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.356802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.356816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.356823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.356829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.356842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.366798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.366896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.366909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.366916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.366923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.366936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.376789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.376844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.376869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.376878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.376884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.376903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.386840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.386892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.386907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.386914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.386921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.386935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.396827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.396883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.396908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.396917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.396923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.396942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.406900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.406963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.406988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.406997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.407003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.407022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.416931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.416980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.416995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.417003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.417009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.417024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.426943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.426990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.427005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.427012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.427019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.427033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.436927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.436981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.436995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.437007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.437015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.437029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.447008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.447059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.447073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.447080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.447086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.447100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.369 [2024-11-26 07:41:43.457027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.369 [2024-11-26 07:41:43.457075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.369 [2024-11-26 07:41:43.457089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.369 [2024-11-26 07:41:43.457096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.369 [2024-11-26 07:41:43.457103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.369 [2024-11-26 07:41:43.457117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.369 qpair failed and we were unable to recover it.
00:32:15.631 [2024-11-26 07:41:43.467032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.631 [2024-11-26 07:41:43.467083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.631 [2024-11-26 07:41:43.467096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.631 [2024-11-26 07:41:43.467103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.631 [2024-11-26 07:41:43.467110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.631 [2024-11-26 07:41:43.467124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.631 qpair failed and we were unable to recover it.
00:32:15.631 [2024-11-26 07:41:43.477027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.631 [2024-11-26 07:41:43.477088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.631 [2024-11-26 07:41:43.477102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.631 [2024-11-26 07:41:43.477109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.631 [2024-11-26 07:41:43.477116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.631 [2024-11-26 07:41:43.477133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.631 qpair failed and we were unable to recover it.
00:32:15.631 [2024-11-26 07:41:43.487105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.631 [2024-11-26 07:41:43.487152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.487169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.487176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.487183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.487196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.497127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.497178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.497192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.497199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.497205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.497218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.507137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.507193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.507207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.507214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.507220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.507234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.517175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.517227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.517240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.517247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.517253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.517266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.527223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.527323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.527337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.527344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.527351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.527365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.537219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.537267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.537281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.537288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.537294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.537307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.547258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.547313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.547327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.547333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.547340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.547353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.557252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.557300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.557314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.557321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.557327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.557340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.567327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.567377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.567393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.567403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.567410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.567428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
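[editor's note] On the target side, "Unknown controller ID 0x1" means the I/O-queue CONNECT named a controller ID for which no live admin controller exists in the subsystem. A generic, self-contained sketch of that lookup (illustrative only, not SPDK's internal ctrlr.c code; the struct and function names here are invented for the example):

    /* Generic sketch of the cntlid lookup an NVMe-oF target performs when
     * an I/O queue's Fabrics CONNECT arrives. The CONNECT data carries the
     * controller ID handed out by the earlier admin-queue CONNECT; if no
     * live controller has that ID, the I/O qpair is rejected. */
    #include <stdint.h>
    #include <stdio.h>

    struct ctrlr { uint16_t cntlid; int live; };

    static struct ctrlr *
    lookup_ctrlr(struct ctrlr *tbl, size_t n, uint16_t cntlid)
    {
        for (size_t i = 0; i < n; i++) {
            if (tbl[i].live && tbl[i].cntlid == cntlid) {
                return &tbl[i];
            }
        }
        return NULL; /* -> "Unknown controller ID", CONNECT fails */
    }

    int main(void)
    {
        struct ctrlr tbl[] = { { 0x2, 1 }, { 0x3, 1 } };

        if (lookup_ctrlr(tbl, 2, 0x1) == NULL) {
            printf("Unknown controller ID 0x1 -> reject I/O qpair\n");
        }
        return 0;
    }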
00:32:15.632 [2024-11-26 07:41:43.577343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.577389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.577403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.577411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.577417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.577431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.587391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.587437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.587451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.587457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.587464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.587477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.597254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.597308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.597322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.597328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.597335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.597348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.607448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.632 [2024-11-26 07:41:43.607497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.632 [2024-11-26 07:41:43.607510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.632 [2024-11-26 07:41:43.607517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.632 [2024-11-26 07:41:43.607523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.632 [2024-11-26 07:41:43.607545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.632 qpair failed and we were unable to recover it.
00:32:15.632 [2024-11-26 07:41:43.617462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.617512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.617526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.617533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.617539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.617553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.633 [2024-11-26 07:41:43.627505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.627556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.627571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.627578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.627584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.627598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.633 [2024-11-26 07:41:43.637484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.637540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.637554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.637561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.637568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.637582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.633 [2024-11-26 07:41:43.647552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.647612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.647627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.647634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.647642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.647657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.633 [2024-11-26 07:41:43.657618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.657672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.657686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.657693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.657699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.657712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.633 [2024-11-26 07:41:43.667606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.667655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.667668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.667676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.667682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.667696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.633 [2024-11-26 07:41:43.677593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.677647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.677661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.677668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.677674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.677687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.633 [2024-11-26 07:41:43.687671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.687724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.687737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.687745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.687751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.687764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.633 [2024-11-26 07:41:43.697700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.697770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.697784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.697794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.697801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.697814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.633 [2024-11-26 07:41:43.707706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.707764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.707777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.707784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.707791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.707804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.633 [2024-11-26 07:41:43.717652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.633 [2024-11-26 07:41:43.717704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.633 [2024-11-26 07:41:43.717717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.633 [2024-11-26 07:41:43.717724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.633 [2024-11-26 07:41:43.717731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.633 [2024-11-26 07:41:43.717744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.633 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.727730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.895 [2024-11-26 07:41:43.727784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.895 [2024-11-26 07:41:43.727797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.895 [2024-11-26 07:41:43.727805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.895 [2024-11-26 07:41:43.727811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.895 [2024-11-26 07:41:43.727825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.895 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.737785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.895 [2024-11-26 07:41:43.737840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.895 [2024-11-26 07:41:43.737853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.895 [2024-11-26 07:41:43.737862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.895 [2024-11-26 07:41:43.737868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.895 [2024-11-26 07:41:43.737882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.895 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.747780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.895 [2024-11-26 07:41:43.747845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.895 [2024-11-26 07:41:43.747860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.895 [2024-11-26 07:41:43.747867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.895 [2024-11-26 07:41:43.747873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.895 [2024-11-26 07:41:43.747886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.895 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.757780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.895 [2024-11-26 07:41:43.757830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.895 [2024-11-26 07:41:43.757843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.895 [2024-11-26 07:41:43.757850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.895 [2024-11-26 07:41:43.757856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.895 [2024-11-26 07:41:43.757870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.895 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.767877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.895 [2024-11-26 07:41:43.767924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.895 [2024-11-26 07:41:43.767937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.895 [2024-11-26 07:41:43.767944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.895 [2024-11-26 07:41:43.767950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.895 [2024-11-26 07:41:43.767963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.895 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.777889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.895 [2024-11-26 07:41:43.777978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.895 [2024-11-26 07:41:43.778003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.895 [2024-11-26 07:41:43.778012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.895 [2024-11-26 07:41:43.778019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.895 [2024-11-26 07:41:43.778038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.895 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.787919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.895 [2024-11-26 07:41:43.787975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.895 [2024-11-26 07:41:43.787990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.895 [2024-11-26 07:41:43.787998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.895 [2024-11-26 07:41:43.788004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.895 [2024-11-26 07:41:43.788019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.895 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.797909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.895 [2024-11-26 07:41:43.797963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.895 [2024-11-26 07:41:43.797977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.895 [2024-11-26 07:41:43.797984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.895 [2024-11-26 07:41:43.797991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.895 [2024-11-26 07:41:43.798005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.895 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.807971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.895 [2024-11-26 07:41:43.808027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.895 [2024-11-26 07:41:43.808043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.895 [2024-11-26 07:41:43.808050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.895 [2024-11-26 07:41:43.808056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.895 [2024-11-26 07:41:43.808071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.895 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.817993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.895 [2024-11-26 07:41:43.818095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.895 [2024-11-26 07:41:43.818109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.895 [2024-11-26 07:41:43.818116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.895 [2024-11-26 07:41:43.818122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.895 [2024-11-26 07:41:43.818137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.895 qpair failed and we were unable to recover it.
00:32:15.895 [2024-11-26 07:41:43.828035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.896 [2024-11-26 07:41:43.828084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.896 [2024-11-26 07:41:43.828098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.896 [2024-11-26 07:41:43.828109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.896 [2024-11-26 07:41:43.828116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.896 [2024-11-26 07:41:43.828129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.896 qpair failed and we were unable to recover it.
00:32:15.896 [2024-11-26 07:41:43.838010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.896 [2024-11-26 07:41:43.838058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.896 [2024-11-26 07:41:43.838073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.896 [2024-11-26 07:41:43.838080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.896 [2024-11-26 07:41:43.838086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.896 [2024-11-26 07:41:43.838100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.896 qpair failed and we were unable to recover it.
00:32:15.896 [2024-11-26 07:41:43.847971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.896 [2024-11-26 07:41:43.848027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.896 [2024-11-26 07:41:43.848042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.896 [2024-11-26 07:41:43.848049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.896 [2024-11-26 07:41:43.848055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.896 [2024-11-26 07:41:43.848069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.896 qpair failed and we were unable to recover it.
00:32:15.896 [2024-11-26 07:41:43.858112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.896 [2024-11-26 07:41:43.858166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.896 [2024-11-26 07:41:43.858180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.896 [2024-11-26 07:41:43.858187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.896 [2024-11-26 07:41:43.858194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.896 [2024-11-26 07:41:43.858208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.896 qpair failed and we were unable to recover it.
00:32:15.896 [2024-11-26 07:41:43.868009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:15.896 [2024-11-26 07:41:43.868058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:15.896 [2024-11-26 07:41:43.868071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:15.896 [2024-11-26 07:41:43.868078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:15.896 [2024-11-26 07:41:43.868085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:15.896 [2024-11-26 07:41:43.868098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:15.896 qpair failed and we were unable to recover it.
00:32:15.896 [2024-11-26 07:41:43.878106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.896 [2024-11-26 07:41:43.878153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.896 [2024-11-26 07:41:43.878172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.896 [2024-11-26 07:41:43.878179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.896 [2024-11-26 07:41:43.878185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.896 [2024-11-26 07:41:43.878199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.896 qpair failed and we were unable to recover it. 00:32:15.896 [2024-11-26 07:41:43.888196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.896 [2024-11-26 07:41:43.888250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.896 [2024-11-26 07:41:43.888263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.896 [2024-11-26 07:41:43.888270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.896 [2024-11-26 07:41:43.888277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.896 [2024-11-26 07:41:43.888290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.896 qpair failed and we were unable to recover it. 00:32:15.896 [2024-11-26 07:41:43.898207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.896 [2024-11-26 07:41:43.898256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.896 [2024-11-26 07:41:43.898270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.896 [2024-11-26 07:41:43.898277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.896 [2024-11-26 07:41:43.898283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.896 [2024-11-26 07:41:43.898298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.896 qpair failed and we were unable to recover it. 
00:32:15.896 [2024-11-26 07:41:43.908245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.896 [2024-11-26 07:41:43.908296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.896 [2024-11-26 07:41:43.908310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.896 [2024-11-26 07:41:43.908317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.896 [2024-11-26 07:41:43.908323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.896 [2024-11-26 07:41:43.908337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.896 qpair failed and we were unable to recover it. 00:32:15.896 [2024-11-26 07:41:43.918202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.896 [2024-11-26 07:41:43.918254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.896 [2024-11-26 07:41:43.918267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.896 [2024-11-26 07:41:43.918274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.896 [2024-11-26 07:41:43.918281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.896 [2024-11-26 07:41:43.918294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.896 qpair failed and we were unable to recover it. 00:32:15.896 [2024-11-26 07:41:43.928311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.896 [2024-11-26 07:41:43.928362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.896 [2024-11-26 07:41:43.928375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.896 [2024-11-26 07:41:43.928382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.896 [2024-11-26 07:41:43.928388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.896 [2024-11-26 07:41:43.928402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.896 qpair failed and we were unable to recover it. 
00:32:15.896 [2024-11-26 07:41:43.938295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.896 [2024-11-26 07:41:43.938349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.896 [2024-11-26 07:41:43.938362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.896 [2024-11-26 07:41:43.938369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.896 [2024-11-26 07:41:43.938375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.896 [2024-11-26 07:41:43.938389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.896 qpair failed and we were unable to recover it. 00:32:15.896 [2024-11-26 07:41:43.948349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.896 [2024-11-26 07:41:43.948412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.896 [2024-11-26 07:41:43.948425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.896 [2024-11-26 07:41:43.948432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.896 [2024-11-26 07:41:43.948438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.897 [2024-11-26 07:41:43.948452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.897 qpair failed and we were unable to recover it. 00:32:15.897 [2024-11-26 07:41:43.958324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.897 [2024-11-26 07:41:43.958379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.897 [2024-11-26 07:41:43.958392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.897 [2024-11-26 07:41:43.958403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.897 [2024-11-26 07:41:43.958409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.897 [2024-11-26 07:41:43.958423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.897 qpair failed and we were unable to recover it. 
00:32:15.897 [2024-11-26 07:41:43.968364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.897 [2024-11-26 07:41:43.968414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.897 [2024-11-26 07:41:43.968428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.897 [2024-11-26 07:41:43.968435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.897 [2024-11-26 07:41:43.968441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.897 [2024-11-26 07:41:43.968454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.897 qpair failed and we were unable to recover it. 00:32:15.897 [2024-11-26 07:41:43.978422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:15.897 [2024-11-26 07:41:43.978475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:15.897 [2024-11-26 07:41:43.978488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:15.897 [2024-11-26 07:41:43.978495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:15.897 [2024-11-26 07:41:43.978501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:15.897 [2024-11-26 07:41:43.978515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:15.897 qpair failed and we were unable to recover it. 00:32:16.158 [2024-11-26 07:41:43.988452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.158 [2024-11-26 07:41:43.988501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.158 [2024-11-26 07:41:43.988514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.158 [2024-11-26 07:41:43.988522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.158 [2024-11-26 07:41:43.988528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.158 [2024-11-26 07:41:43.988541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.158 qpair failed and we were unable to recover it. 
00:32:16.158 [2024-11-26 07:41:43.998436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.158 [2024-11-26 07:41:43.998483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.158 [2024-11-26 07:41:43.998496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.158 [2024-11-26 07:41:43.998503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.158 [2024-11-26 07:41:43.998509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.158 [2024-11-26 07:41:43.998522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.158 qpair failed and we were unable to recover it. 00:32:16.158 [2024-11-26 07:41:44.008518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.158 [2024-11-26 07:41:44.008565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.158 [2024-11-26 07:41:44.008579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.158 [2024-11-26 07:41:44.008586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.158 [2024-11-26 07:41:44.008592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.158 [2024-11-26 07:41:44.008605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.158 qpair failed and we were unable to recover it. 00:32:16.158 [2024-11-26 07:41:44.018532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.158 [2024-11-26 07:41:44.018579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.158 [2024-11-26 07:41:44.018592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.158 [2024-11-26 07:41:44.018599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.158 [2024-11-26 07:41:44.018605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.158 [2024-11-26 07:41:44.018619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.158 qpair failed and we were unable to recover it. 
00:32:16.158 [2024-11-26 07:41:44.028553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.159 [2024-11-26 07:41:44.028601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.159 [2024-11-26 07:41:44.028614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.159 [2024-11-26 07:41:44.028621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.159 [2024-11-26 07:41:44.028627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.159 [2024-11-26 07:41:44.028641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.159 qpair failed and we were unable to recover it. 00:32:16.159 [2024-11-26 07:41:44.038549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.159 [2024-11-26 07:41:44.038595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.159 [2024-11-26 07:41:44.038608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.159 [2024-11-26 07:41:44.038615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.159 [2024-11-26 07:41:44.038621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.159 [2024-11-26 07:41:44.038635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.159 qpair failed and we were unable to recover it. 00:32:16.159 [2024-11-26 07:41:44.048601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.159 [2024-11-26 07:41:44.048657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.159 [2024-11-26 07:41:44.048672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.159 [2024-11-26 07:41:44.048679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.159 [2024-11-26 07:41:44.048685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.159 [2024-11-26 07:41:44.048699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.159 qpair failed and we were unable to recover it. 
00:32:16.159 [2024-11-26 07:41:44.058613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.159 [2024-11-26 07:41:44.058662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.159 [2024-11-26 07:41:44.058675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.159 [2024-11-26 07:41:44.058682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.159 [2024-11-26 07:41:44.058689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.159 [2024-11-26 07:41:44.058702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.159 qpair failed and we were unable to recover it. 00:32:16.159 [2024-11-26 07:41:44.068654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.159 [2024-11-26 07:41:44.068701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.159 [2024-11-26 07:41:44.068715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.159 [2024-11-26 07:41:44.068722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.159 [2024-11-26 07:41:44.068728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.159 [2024-11-26 07:41:44.068741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.159 qpair failed and we were unable to recover it. 00:32:16.159 [2024-11-26 07:41:44.078639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.159 [2024-11-26 07:41:44.078691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.159 [2024-11-26 07:41:44.078704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.159 [2024-11-26 07:41:44.078711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.159 [2024-11-26 07:41:44.078717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.159 [2024-11-26 07:41:44.078731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.159 qpair failed and we were unable to recover it. 
00:32:16.159 [2024-11-26 07:41:44.088691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.159 [2024-11-26 07:41:44.088740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.159 [2024-11-26 07:41:44.088753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.159 [2024-11-26 07:41:44.088763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.159 [2024-11-26 07:41:44.088770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.159 [2024-11-26 07:41:44.088783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.159 qpair failed and we were unable to recover it. 00:32:16.159 [2024-11-26 07:41:44.098730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.159 [2024-11-26 07:41:44.098791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.159 [2024-11-26 07:41:44.098804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.159 [2024-11-26 07:41:44.098811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.159 [2024-11-26 07:41:44.098817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.159 [2024-11-26 07:41:44.098831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.159 qpair failed and we were unable to recover it. 00:32:16.159 [2024-11-26 07:41:44.108726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.159 [2024-11-26 07:41:44.108774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.159 [2024-11-26 07:41:44.108787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.159 [2024-11-26 07:41:44.108794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.159 [2024-11-26 07:41:44.108800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.159 [2024-11-26 07:41:44.108814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.159 qpair failed and we were unable to recover it. 
00:32:16.159 [2024-11-26 07:41:44.118638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.159 [2024-11-26 07:41:44.118688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.160 [2024-11-26 07:41:44.118701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.160 [2024-11-26 07:41:44.118708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.160 [2024-11-26 07:41:44.118715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.160 [2024-11-26 07:41:44.118728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.160 qpair failed and we were unable to recover it. 00:32:16.160 [2024-11-26 07:41:44.128814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.160 [2024-11-26 07:41:44.128864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.160 [2024-11-26 07:41:44.128878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.160 [2024-11-26 07:41:44.128885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.160 [2024-11-26 07:41:44.128891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.160 [2024-11-26 07:41:44.128905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.160 qpair failed and we were unable to recover it. 00:32:16.160 [2024-11-26 07:41:44.138838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.160 [2024-11-26 07:41:44.138891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.160 [2024-11-26 07:41:44.138916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.160 [2024-11-26 07:41:44.138925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.160 [2024-11-26 07:41:44.138932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.160 [2024-11-26 07:41:44.138950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.160 qpair failed and we were unable to recover it. 
00:32:16.160 [2024-11-26 07:41:44.148884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.160 [2024-11-26 07:41:44.148933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.160 [2024-11-26 07:41:44.148948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.160 [2024-11-26 07:41:44.148955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.160 [2024-11-26 07:41:44.148961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.160 [2024-11-26 07:41:44.148976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.160 qpair failed and we were unable to recover it. 00:32:16.160 [2024-11-26 07:41:44.158873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.160 [2024-11-26 07:41:44.158931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.160 [2024-11-26 07:41:44.158956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.160 [2024-11-26 07:41:44.158965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.160 [2024-11-26 07:41:44.158972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.160 [2024-11-26 07:41:44.158991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.160 qpair failed and we were unable to recover it. 00:32:16.160 [2024-11-26 07:41:44.168837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.160 [2024-11-26 07:41:44.168893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.160 [2024-11-26 07:41:44.168909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.160 [2024-11-26 07:41:44.168916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.160 [2024-11-26 07:41:44.168922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.160 [2024-11-26 07:41:44.168937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.160 qpair failed and we were unable to recover it. 
00:32:16.160 [2024-11-26 07:41:44.178950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.160 [2024-11-26 07:41:44.179006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.160 [2024-11-26 07:41:44.179021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.160 [2024-11-26 07:41:44.179028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.160 [2024-11-26 07:41:44.179034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.160 [2024-11-26 07:41:44.179048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.160 qpair failed and we were unable to recover it. 00:32:16.160 [2024-11-26 07:41:44.188967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.160 [2024-11-26 07:41:44.189014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.160 [2024-11-26 07:41:44.189028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.160 [2024-11-26 07:41:44.189036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.160 [2024-11-26 07:41:44.189043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.160 [2024-11-26 07:41:44.189057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.160 qpair failed and we were unable to recover it. 00:32:16.160 [2024-11-26 07:41:44.198959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.160 [2024-11-26 07:41:44.199007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.160 [2024-11-26 07:41:44.199020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.160 [2024-11-26 07:41:44.199027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.160 [2024-11-26 07:41:44.199033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.160 [2024-11-26 07:41:44.199047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.160 qpair failed and we were unable to recover it. 
00:32:16.160 [2024-11-26 07:41:44.208985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.160 [2024-11-26 07:41:44.209047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.160 [2024-11-26 07:41:44.209062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.160 [2024-11-26 07:41:44.209069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.161 [2024-11-26 07:41:44.209077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.161 [2024-11-26 07:41:44.209096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.161 qpair failed and we were unable to recover it. 00:32:16.161 [2024-11-26 07:41:44.219069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.161 [2024-11-26 07:41:44.219120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.161 [2024-11-26 07:41:44.219135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.161 [2024-11-26 07:41:44.219146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.161 [2024-11-26 07:41:44.219153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.161 [2024-11-26 07:41:44.219173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.161 qpair failed and we were unable to recover it. 00:32:16.161 [2024-11-26 07:41:44.229104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.161 [2024-11-26 07:41:44.229156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.161 [2024-11-26 07:41:44.229173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.161 [2024-11-26 07:41:44.229180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.161 [2024-11-26 07:41:44.229186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.161 [2024-11-26 07:41:44.229200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.161 qpair failed and we were unable to recover it. 
00:32:16.161 [2024-11-26 07:41:44.239095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.161 [2024-11-26 07:41:44.239155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.161 [2024-11-26 07:41:44.239174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.161 [2024-11-26 07:41:44.239181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.161 [2024-11-26 07:41:44.239187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.161 [2024-11-26 07:41:44.239201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.161 qpair failed and we were unable to recover it. 00:32:16.161 [2024-11-26 07:41:44.249134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.161 [2024-11-26 07:41:44.249194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.161 [2024-11-26 07:41:44.249208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.161 [2024-11-26 07:41:44.249215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.161 [2024-11-26 07:41:44.249221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.161 [2024-11-26 07:41:44.249235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.161 qpair failed and we were unable to recover it. 00:32:16.422 [2024-11-26 07:41:44.259148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.259197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.259212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.259219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.259225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.259240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 
00:32:16.422 [2024-11-26 07:41:44.269083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.269139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.269154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.269166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.269172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.269187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 00:32:16.422 [2024-11-26 07:41:44.279178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.279225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.279239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.279246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.279253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.279267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 00:32:16.422 [2024-11-26 07:41:44.289152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.289208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.289222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.289229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.289235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.289249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 
00:32:16.422 [2024-11-26 07:41:44.299285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.299334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.299347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.299354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.299360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.299373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 00:32:16.422 [2024-11-26 07:41:44.309302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.309357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.309371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.309378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.309384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.309398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 00:32:16.422 [2024-11-26 07:41:44.319306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.319352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.319365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.319373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.319379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.319392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 
00:32:16.422 [2024-11-26 07:41:44.329373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.329475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.329488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.329495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.329502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.329515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 00:32:16.422 [2024-11-26 07:41:44.339398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.339443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.339457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.339463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.339470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.339483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 00:32:16.422 [2024-11-26 07:41:44.349401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.349450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.349463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.349474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.349480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.349494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 
00:32:16.422 [2024-11-26 07:41:44.359440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.359489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.422 [2024-11-26 07:41:44.359502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.422 [2024-11-26 07:41:44.359509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.422 [2024-11-26 07:41:44.359516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.422 [2024-11-26 07:41:44.359529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.422 qpair failed and we were unable to recover it. 00:32:16.422 [2024-11-26 07:41:44.369460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.422 [2024-11-26 07:41:44.369509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.369522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.369528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.369535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.369548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 00:32:16.423 [2024-11-26 07:41:44.379497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.379544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.379557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.379565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.379571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.379584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 
00:32:16.423 [2024-11-26 07:41:44.389545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.389592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.389605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.389612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.389619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.389632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 00:32:16.423 [2024-11-26 07:41:44.399395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.399441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.399455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.399462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.399468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.399483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 00:32:16.423 [2024-11-26 07:41:44.409579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.409641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.409655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.409662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.409668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.409682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 
00:32:16.423 [2024-11-26 07:41:44.419575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.419627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.419640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.419647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.419653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.419667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 00:32:16.423 [2024-11-26 07:41:44.429624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.429711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.429724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.429731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.429738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.429751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 00:32:16.423 [2024-11-26 07:41:44.439621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.439685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.439699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.439707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.439714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.439727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 
00:32:16.423 [2024-11-26 07:41:44.449725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.449776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.449789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.449796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.449802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.449815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 00:32:16.423 [2024-11-26 07:41:44.459717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.459769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.459783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.459790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.459796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.459810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 00:32:16.423 [2024-11-26 07:41:44.469732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.469776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.469789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.469796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.469803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.469816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 
00:32:16.423 [2024-11-26 07:41:44.479688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.479739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.479752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.479762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.479769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.479782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 00:32:16.423 [2024-11-26 07:41:44.489799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.489857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.423 [2024-11-26 07:41:44.489870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.423 [2024-11-26 07:41:44.489877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.423 [2024-11-26 07:41:44.489883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.423 [2024-11-26 07:41:44.489897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.423 qpair failed and we were unable to recover it. 00:32:16.423 [2024-11-26 07:41:44.499798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.423 [2024-11-26 07:41:44.499845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.424 [2024-11-26 07:41:44.499858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.424 [2024-11-26 07:41:44.499865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.424 [2024-11-26 07:41:44.499871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.424 [2024-11-26 07:41:44.499884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.424 qpair failed and we were unable to recover it. 
00:32:16.424 [2024-11-26 07:41:44.509836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.424 [2024-11-26 07:41:44.509887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.424 [2024-11-26 07:41:44.509901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.424 [2024-11-26 07:41:44.509908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.424 [2024-11-26 07:41:44.509914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.424 [2024-11-26 07:41:44.509928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.424 qpair failed and we were unable to recover it. 00:32:16.684 [2024-11-26 07:41:44.519827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.684 [2024-11-26 07:41:44.519877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.684 [2024-11-26 07:41:44.519890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.684 [2024-11-26 07:41:44.519897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.684 [2024-11-26 07:41:44.519904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.684 [2024-11-26 07:41:44.519917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.684 qpair failed and we were unable to recover it. 00:32:16.684 [2024-11-26 07:41:44.529902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.684 [2024-11-26 07:41:44.529960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.684 [2024-11-26 07:41:44.529975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.684 [2024-11-26 07:41:44.529983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.529989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.530006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 
00:32:16.685 [2024-11-26 07:41:44.539924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.685 [2024-11-26 07:41:44.539976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.685 [2024-11-26 07:41:44.539990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.685 [2024-11-26 07:41:44.539998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.540004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.540018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 00:32:16.685 [2024-11-26 07:41:44.549942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.685 [2024-11-26 07:41:44.549996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.685 [2024-11-26 07:41:44.550009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.685 [2024-11-26 07:41:44.550017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.550023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.550037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 00:32:16.685 [2024-11-26 07:41:44.559825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.685 [2024-11-26 07:41:44.559870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.685 [2024-11-26 07:41:44.559883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.685 [2024-11-26 07:41:44.559891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.559897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.559910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 
00:32:16.685 [2024-11-26 07:41:44.570020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.685 [2024-11-26 07:41:44.570076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.685 [2024-11-26 07:41:44.570089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.685 [2024-11-26 07:41:44.570096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.570102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.570116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 00:32:16.685 [2024-11-26 07:41:44.580026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.685 [2024-11-26 07:41:44.580080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.685 [2024-11-26 07:41:44.580094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.685 [2024-11-26 07:41:44.580101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.580107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.580121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 00:32:16.685 [2024-11-26 07:41:44.590043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.685 [2024-11-26 07:41:44.590098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.685 [2024-11-26 07:41:44.590112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.685 [2024-11-26 07:41:44.590119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.590125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.590138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 
00:32:16.685 [2024-11-26 07:41:44.600061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.685 [2024-11-26 07:41:44.600107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.685 [2024-11-26 07:41:44.600120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.685 [2024-11-26 07:41:44.600127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.600133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.600147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 00:32:16.685 [2024-11-26 07:41:44.610119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.685 [2024-11-26 07:41:44.610177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.685 [2024-11-26 07:41:44.610190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.685 [2024-11-26 07:41:44.610204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.610211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.610225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 00:32:16.685 [2024-11-26 07:41:44.620144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.685 [2024-11-26 07:41:44.620254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.685 [2024-11-26 07:41:44.620268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.685 [2024-11-26 07:41:44.620275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.620282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.620295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 
00:32:16.685 [2024-11-26 07:41:44.630174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.685 [2024-11-26 07:41:44.630223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.685 [2024-11-26 07:41:44.630236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.685 [2024-11-26 07:41:44.630243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.685 [2024-11-26 07:41:44.630249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.685 [2024-11-26 07:41:44.630263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.685 qpair failed and we were unable to recover it. 00:32:16.686 [2024-11-26 07:41:44.640151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.640224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.640237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.640244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.686 [2024-11-26 07:41:44.640251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.686 [2024-11-26 07:41:44.640265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.686 qpair failed and we were unable to recover it. 00:32:16.686 [2024-11-26 07:41:44.650433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.650488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.650500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.650507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.686 [2024-11-26 07:41:44.650514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.686 [2024-11-26 07:41:44.650527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.686 qpair failed and we were unable to recover it. 
00:32:16.686 [2024-11-26 07:41:44.660238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.660289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.660302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.660309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.686 [2024-11-26 07:41:44.660315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.686 [2024-11-26 07:41:44.660329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.686 qpair failed and we were unable to recover it. 00:32:16.686 [2024-11-26 07:41:44.670280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.670331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.670347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.670355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.686 [2024-11-26 07:41:44.670361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.686 [2024-11-26 07:41:44.670376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.686 qpair failed and we were unable to recover it. 00:32:16.686 [2024-11-26 07:41:44.680183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.680230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.680245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.680252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.686 [2024-11-26 07:41:44.680258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.686 [2024-11-26 07:41:44.680273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.686 qpair failed and we were unable to recover it. 
00:32:16.686 [2024-11-26 07:41:44.690337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.690394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.690408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.690416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.686 [2024-11-26 07:41:44.690423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.686 [2024-11-26 07:41:44.690437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.686 qpair failed and we were unable to recover it. 00:32:16.686 [2024-11-26 07:41:44.700340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.700396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.700409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.700416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.686 [2024-11-26 07:41:44.700422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.686 [2024-11-26 07:41:44.700436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.686 qpair failed and we were unable to recover it. 00:32:16.686 [2024-11-26 07:41:44.710379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.710425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.710439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.710446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.686 [2024-11-26 07:41:44.710453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.686 [2024-11-26 07:41:44.710466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.686 qpair failed and we were unable to recover it. 
00:32:16.686 [2024-11-26 07:41:44.720376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.720419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.720432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.720439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.686 [2024-11-26 07:41:44.720446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.686 [2024-11-26 07:41:44.720459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.686 qpair failed and we were unable to recover it. 00:32:16.686 [2024-11-26 07:41:44.730440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.730491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.730504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.730511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.686 [2024-11-26 07:41:44.730517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.686 [2024-11-26 07:41:44.730531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.686 qpair failed and we were unable to recover it. 00:32:16.686 [2024-11-26 07:41:44.740363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.686 [2024-11-26 07:41:44.740466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.686 [2024-11-26 07:41:44.740480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.686 [2024-11-26 07:41:44.740490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.687 [2024-11-26 07:41:44.740496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.687 [2024-11-26 07:41:44.740510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.687 qpair failed and we were unable to recover it. 
00:32:16.687 [2024-11-26 07:41:44.750490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.687 [2024-11-26 07:41:44.750537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.687 [2024-11-26 07:41:44.750549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.687 [2024-11-26 07:41:44.750556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.687 [2024-11-26 07:41:44.750563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.687 [2024-11-26 07:41:44.750576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.687 qpair failed and we were unable to recover it. 00:32:16.687 [2024-11-26 07:41:44.760401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.687 [2024-11-26 07:41:44.760449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.687 [2024-11-26 07:41:44.760462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.687 [2024-11-26 07:41:44.760469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.687 [2024-11-26 07:41:44.760475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.687 [2024-11-26 07:41:44.760488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.687 qpair failed and we were unable to recover it. 00:32:16.687 [2024-11-26 07:41:44.770545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.687 [2024-11-26 07:41:44.770595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.687 [2024-11-26 07:41:44.770608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.687 [2024-11-26 07:41:44.770615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.687 [2024-11-26 07:41:44.770621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.687 [2024-11-26 07:41:44.770635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.687 qpair failed and we were unable to recover it. 
00:32:16.947 [2024-11-26 07:41:44.780568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.947 [2024-11-26 07:41:44.780613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.947 [2024-11-26 07:41:44.780626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.947 [2024-11-26 07:41:44.780633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.947 [2024-11-26 07:41:44.780639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.947 [2024-11-26 07:41:44.780653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.947 qpair failed and we were unable to recover it. 00:32:16.947 [2024-11-26 07:41:44.790593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.947 [2024-11-26 07:41:44.790641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.947 [2024-11-26 07:41:44.790654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.947 [2024-11-26 07:41:44.790661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.947 [2024-11-26 07:41:44.790667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.947 [2024-11-26 07:41:44.790681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.947 qpair failed and we were unable to recover it. 00:32:16.947 [2024-11-26 07:41:44.800596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.947 [2024-11-26 07:41:44.800644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.947 [2024-11-26 07:41:44.800659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.947 [2024-11-26 07:41:44.800666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.947 [2024-11-26 07:41:44.800673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.947 [2024-11-26 07:41:44.800687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.947 qpair failed and we were unable to recover it. 
00:32:16.947 [2024-11-26 07:41:44.810641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.947 [2024-11-26 07:41:44.810706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.947 [2024-11-26 07:41:44.810719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.947 [2024-11-26 07:41:44.810726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.947 [2024-11-26 07:41:44.810733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.947 [2024-11-26 07:41:44.810747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.947 qpair failed and we were unable to recover it. 00:32:16.947 [2024-11-26 07:41:44.820651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.947 [2024-11-26 07:41:44.820700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.947 [2024-11-26 07:41:44.820713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.947 [2024-11-26 07:41:44.820719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.947 [2024-11-26 07:41:44.820726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.947 [2024-11-26 07:41:44.820740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.947 qpair failed and we were unable to recover it. 00:32:16.947 [2024-11-26 07:41:44.830715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.947 [2024-11-26 07:41:44.830764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.947 [2024-11-26 07:41:44.830780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.947 [2024-11-26 07:41:44.830787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.947 [2024-11-26 07:41:44.830793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.947 [2024-11-26 07:41:44.830807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.947 qpair failed and we were unable to recover it. 
00:32:16.947 [2024-11-26 07:41:44.840677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.947 [2024-11-26 07:41:44.840721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.947 [2024-11-26 07:41:44.840734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.947 [2024-11-26 07:41:44.840741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.947 [2024-11-26 07:41:44.840747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.947 [2024-11-26 07:41:44.840761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.947 qpair failed and we were unable to recover it. 00:32:16.947 [2024-11-26 07:41:44.850722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.947 [2024-11-26 07:41:44.850771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.947 [2024-11-26 07:41:44.850783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.947 [2024-11-26 07:41:44.850790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.947 [2024-11-26 07:41:44.850796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.947 [2024-11-26 07:41:44.850810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.947 qpair failed and we were unable to recover it. 00:32:16.947 [2024-11-26 07:41:44.860749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.947 [2024-11-26 07:41:44.860802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.947 [2024-11-26 07:41:44.860815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.947 [2024-11-26 07:41:44.860821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.947 [2024-11-26 07:41:44.860827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.947 [2024-11-26 07:41:44.860841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 
00:32:16.948 [2024-11-26 07:41:44.870807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.870862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.870876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.870886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.870892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.870906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 00:32:16.948 [2024-11-26 07:41:44.880801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.880869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.880886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.880893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.880899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.880914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 00:32:16.948 [2024-11-26 07:41:44.890878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.890933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.890946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.890953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.890959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.890973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 
00:32:16.948 [2024-11-26 07:41:44.900895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.900987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.901000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.901007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.901013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.901027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 00:32:16.948 [2024-11-26 07:41:44.910926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.910972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.910986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.910993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.910999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.911013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 00:32:16.948 [2024-11-26 07:41:44.920900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.920944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.920957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.920964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.920970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.920983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 
00:32:16.948 [2024-11-26 07:41:44.930989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.931034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.931048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.931055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.931061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.931074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 00:32:16.948 [2024-11-26 07:41:44.941001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.941083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.941097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.941104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.941110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.941124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 00:32:16.948 [2024-11-26 07:41:44.951028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.951080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.951093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.951100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.951106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.951120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 
00:32:16.948 [2024-11-26 07:41:44.961039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.961086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.961103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.961109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.961116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.961129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 00:32:16.948 [2024-11-26 07:41:44.971077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.971127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.971140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.971147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.971153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.948 [2024-11-26 07:41:44.971171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.948 qpair failed and we were unable to recover it. 00:32:16.948 [2024-11-26 07:41:44.981085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:16.948 [2024-11-26 07:41:44.981137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:16.948 [2024-11-26 07:41:44.981150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:16.948 [2024-11-26 07:41:44.981163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:16.948 [2024-11-26 07:41:44.981169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:16.949 [2024-11-26 07:41:44.981183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:16.949 qpair failed and we were unable to recover it. 
00:32:17.738 [2024-11-26 07:41:45.622813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.738 [2024-11-26 07:41:45.622862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.738 [2024-11-26 07:41:45.622887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.738 [2024-11-26 07:41:45.622896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.738 [2024-11-26 07:41:45.622903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.738 [2024-11-26 07:41:45.622923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.738 qpair failed and we were unable to recover it. 00:32:17.739 [2024-11-26 07:41:45.632835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.632884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.632908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.632917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.632924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.632943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 00:32:17.739 [2024-11-26 07:41:45.642742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.642790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.642806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.642813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.642819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.642834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 
00:32:17.739 [2024-11-26 07:41:45.652951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.652999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.653014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.653025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.653032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.653046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 00:32:17.739 [2024-11-26 07:41:45.662922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.663019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.663044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.663052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.663059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.663078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 00:32:17.739 [2024-11-26 07:41:45.672933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.672983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.672998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.673005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.673012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.673027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 
00:32:17.739 [2024-11-26 07:41:45.683026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.683105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.683120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.683127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.683133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.683147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 00:32:17.739 [2024-11-26 07:41:45.693016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.693064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.693079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.693087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.693093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.693108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 00:32:17.739 [2024-11-26 07:41:45.703024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.703086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.703102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.703110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.703116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.703131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 
00:32:17.739 [2024-11-26 07:41:45.712937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.712982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.712999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.713006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.713012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.713027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 00:32:17.739 [2024-11-26 07:41:45.723079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.723126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.723140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.723147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.723153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.723171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 00:32:17.739 [2024-11-26 07:41:45.733132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.733196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.733210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.733217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.733223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.733237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 
00:32:17.739 [2024-11-26 07:41:45.743109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.743151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.743172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.743179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.743185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.743199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 00:32:17.739 [2024-11-26 07:41:45.753175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.739 [2024-11-26 07:41:45.753256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.739 [2024-11-26 07:41:45.753270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.739 [2024-11-26 07:41:45.753277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.739 [2024-11-26 07:41:45.753283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.739 [2024-11-26 07:41:45.753297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.739 qpair failed and we were unable to recover it. 00:32:17.739 [2024-11-26 07:41:45.763206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.740 [2024-11-26 07:41:45.763252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.740 [2024-11-26 07:41:45.763266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.740 [2024-11-26 07:41:45.763273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.740 [2024-11-26 07:41:45.763279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.740 [2024-11-26 07:41:45.763293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.740 qpair failed and we were unable to recover it. 
00:32:17.740 [2024-11-26 07:41:45.773256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.740 [2024-11-26 07:41:45.773313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.740 [2024-11-26 07:41:45.773326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.740 [2024-11-26 07:41:45.773333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.740 [2024-11-26 07:41:45.773339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.740 [2024-11-26 07:41:45.773353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.740 qpair failed and we were unable to recover it. 00:32:17.740 [2024-11-26 07:41:45.783245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.740 [2024-11-26 07:41:45.783287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.740 [2024-11-26 07:41:45.783301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.740 [2024-11-26 07:41:45.783311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.740 [2024-11-26 07:41:45.783318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.740 [2024-11-26 07:41:45.783332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.740 qpair failed and we were unable to recover it. 00:32:17.740 [2024-11-26 07:41:45.793245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.740 [2024-11-26 07:41:45.793295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.740 [2024-11-26 07:41:45.793308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.740 [2024-11-26 07:41:45.793315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.740 [2024-11-26 07:41:45.793322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.740 [2024-11-26 07:41:45.793335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.740 qpair failed and we were unable to recover it. 
00:32:17.740 [2024-11-26 07:41:45.803310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.740 [2024-11-26 07:41:45.803354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.740 [2024-11-26 07:41:45.803369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.740 [2024-11-26 07:41:45.803376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.740 [2024-11-26 07:41:45.803383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.740 [2024-11-26 07:41:45.803397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.740 qpair failed and we were unable to recover it. 00:32:17.740 [2024-11-26 07:41:45.813353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.740 [2024-11-26 07:41:45.813405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.740 [2024-11-26 07:41:45.813419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.740 [2024-11-26 07:41:45.813426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.740 [2024-11-26 07:41:45.813432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.740 [2024-11-26 07:41:45.813446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.740 qpair failed and we were unable to recover it. 00:32:17.740 [2024-11-26 07:41:45.823364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:17.740 [2024-11-26 07:41:45.823420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:17.740 [2024-11-26 07:41:45.823433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:17.740 [2024-11-26 07:41:45.823440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:17.740 [2024-11-26 07:41:45.823446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:17.740 [2024-11-26 07:41:45.823460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:17.740 qpair failed and we were unable to recover it. 
00:32:18.002 [2024-11-26 07:41:45.833403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.002 [2024-11-26 07:41:45.833461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.002 [2024-11-26 07:41:45.833474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.002 [2024-11-26 07:41:45.833481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.002 [2024-11-26 07:41:45.833487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.002 [2024-11-26 07:41:45.833501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.002 qpair failed and we were unable to recover it. 00:32:18.002 [2024-11-26 07:41:45.843397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.002 [2024-11-26 07:41:45.843444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.002 [2024-11-26 07:41:45.843457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.002 [2024-11-26 07:41:45.843464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.002 [2024-11-26 07:41:45.843470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.002 [2024-11-26 07:41:45.843484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.002 qpair failed and we were unable to recover it. 00:32:18.002 [2024-11-26 07:41:45.853516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.002 [2024-11-26 07:41:45.853574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.853589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.853596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.853604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.853622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 
00:32:18.003 [2024-11-26 07:41:45.863432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.863486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.863500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.863507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.863514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.863527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 00:32:18.003 [2024-11-26 07:41:45.873464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.873506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.873522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.873529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.873536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.873549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 00:32:18.003 [2024-11-26 07:41:45.883503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.883555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.883568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.883575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.883581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.883595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 
00:32:18.003 [2024-11-26 07:41:45.893470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.893517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.893530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.893537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.893543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.893557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 00:32:18.003 [2024-11-26 07:41:45.903577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.903622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.903636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.903643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.903650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.903664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 00:32:18.003 [2024-11-26 07:41:45.913615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.913657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.913671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.913682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.913689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.913702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 
00:32:18.003 [2024-11-26 07:41:45.923642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.923698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.923711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.923718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.923724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.923738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 00:32:18.003 [2024-11-26 07:41:45.933706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.933757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.933771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.933778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.933785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.933798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 00:32:18.003 [2024-11-26 07:41:45.943691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.943737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.943750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.943757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.943763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.943777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 
00:32:18.003 [2024-11-26 07:41:45.953714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.953780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.953794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.953800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.953807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.953820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 00:32:18.003 [2024-11-26 07:41:45.963730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.963776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.963789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.963796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.963802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.963816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 00:32:18.003 [2024-11-26 07:41:45.973694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.973758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.003 [2024-11-26 07:41:45.973773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.003 [2024-11-26 07:41:45.973780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.003 [2024-11-26 07:41:45.973786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.003 [2024-11-26 07:41:45.973801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.003 qpair failed and we were unable to recover it. 
00:32:18.003 [2024-11-26 07:41:45.983782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.003 [2024-11-26 07:41:45.983840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:45.983853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:45.983861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:45.983867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:45.983880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 00:32:18.004 [2024-11-26 07:41:45.993794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.004 [2024-11-26 07:41:45.993836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:45.993850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:45.993857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:45.993863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:45.993876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 00:32:18.004 [2024-11-26 07:41:46.003840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.004 [2024-11-26 07:41:46.003895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:46.003924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:46.003933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:46.003940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:46.003959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 
00:32:18.004 [2024-11-26 07:41:46.013923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.004 [2024-11-26 07:41:46.013978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:46.013994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:46.014002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:46.014008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:46.014023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 00:32:18.004 [2024-11-26 07:41:46.023912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.004 [2024-11-26 07:41:46.023961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:46.023975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:46.023982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:46.023988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:46.024002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 00:32:18.004 [2024-11-26 07:41:46.033932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.004 [2024-11-26 07:41:46.033979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:46.033992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:46.033999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:46.034005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:46.034019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 
00:32:18.004 [2024-11-26 07:41:46.043957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.004 [2024-11-26 07:41:46.044007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:46.044020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:46.044031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:46.044037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:46.044052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 00:32:18.004 [2024-11-26 07:41:46.054021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.004 [2024-11-26 07:41:46.054073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:46.054086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:46.054093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:46.054099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:46.054113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 00:32:18.004 [2024-11-26 07:41:46.063997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.004 [2024-11-26 07:41:46.064045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:46.064058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:46.064066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:46.064072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:46.064085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 
00:32:18.004 [2024-11-26 07:41:46.074027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.004 [2024-11-26 07:41:46.074072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:46.074086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:46.074093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:46.074099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:46.074112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 00:32:18.004 [2024-11-26 07:41:46.084071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.004 [2024-11-26 07:41:46.084114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.004 [2024-11-26 07:41:46.084128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.004 [2024-11-26 07:41:46.084135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.004 [2024-11-26 07:41:46.084141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.004 [2024-11-26 07:41:46.084155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.004 qpair failed and we were unable to recover it. 00:32:18.266 [2024-11-26 07:41:46.094137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.267 [2024-11-26 07:41:46.094195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.267 [2024-11-26 07:41:46.094209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.267 [2024-11-26 07:41:46.094216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.267 [2024-11-26 07:41:46.094223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.267 [2024-11-26 07:41:46.094236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.267 qpair failed and we were unable to recover it. 
00:32:18.267 [2024-11-26 07:41:46.104137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.267 [2024-11-26 07:41:46.104200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.267 [2024-11-26 07:41:46.104214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.267 [2024-11-26 07:41:46.104221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.267 [2024-11-26 07:41:46.104227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.267 [2024-11-26 07:41:46.104241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.267 qpair failed and we were unable to recover it. 00:32:18.267 [2024-11-26 07:41:46.114128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.267 [2024-11-26 07:41:46.114180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.267 [2024-11-26 07:41:46.114195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.267 [2024-11-26 07:41:46.114202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.267 [2024-11-26 07:41:46.114208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.267 [2024-11-26 07:41:46.114222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.267 qpair failed and we were unable to recover it. 00:32:18.267 [2024-11-26 07:41:46.124179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.267 [2024-11-26 07:41:46.124248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.267 [2024-11-26 07:41:46.124263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.267 [2024-11-26 07:41:46.124270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.267 [2024-11-26 07:41:46.124276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.267 [2024-11-26 07:41:46.124290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.267 qpair failed and we were unable to recover it. 
00:32:18.267 [2024-11-26 07:41:46.134254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.267 [2024-11-26 07:41:46.134305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.267 [2024-11-26 07:41:46.134322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.267 [2024-11-26 07:41:46.134329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.267 [2024-11-26 07:41:46.134335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.267 [2024-11-26 07:41:46.134349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.267 qpair failed and we were unable to recover it. 00:32:18.267 [2024-11-26 07:41:46.144192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.267 [2024-11-26 07:41:46.144234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.267 [2024-11-26 07:41:46.144248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.267 [2024-11-26 07:41:46.144255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.267 [2024-11-26 07:41:46.144261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.267 [2024-11-26 07:41:46.144275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.267 qpair failed and we were unable to recover it. 00:32:18.267 [2024-11-26 07:41:46.154256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.267 [2024-11-26 07:41:46.154299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.267 [2024-11-26 07:41:46.154313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.267 [2024-11-26 07:41:46.154320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.267 [2024-11-26 07:41:46.154326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.267 [2024-11-26 07:41:46.154339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.267 qpair failed and we were unable to recover it. 
00:32:18.267 [2024-11-26 07:41:46.164333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.267 [2024-11-26 07:41:46.164415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.267 [2024-11-26 07:41:46.164428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.267 [2024-11-26 07:41:46.164435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.267 [2024-11-26 07:41:46.164442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.267 [2024-11-26 07:41:46.164455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.267 qpair failed and we were unable to recover it.
00:32:18.267 [2024-11-26 07:41:46.174383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.267 [2024-11-26 07:41:46.174469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.267 [2024-11-26 07:41:46.174482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.267 [2024-11-26 07:41:46.174488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.267 [2024-11-26 07:41:46.174498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.267 [2024-11-26 07:41:46.174512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.267 qpair failed and we were unable to recover it.
00:32:18.267 [2024-11-26 07:41:46.184334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.267 [2024-11-26 07:41:46.184427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.267 [2024-11-26 07:41:46.184440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.267 [2024-11-26 07:41:46.184447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.267 [2024-11-26 07:41:46.184453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.267 [2024-11-26 07:41:46.184466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.267 qpair failed and we were unable to recover it.
00:32:18.267 [2024-11-26 07:41:46.194370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.267 [2024-11-26 07:41:46.194416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.267 [2024-11-26 07:41:46.194429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.267 [2024-11-26 07:41:46.194436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.267 [2024-11-26 07:41:46.194442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.267 [2024-11-26 07:41:46.194456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.267 qpair failed and we were unable to recover it.
00:32:18.267 [2024-11-26 07:41:46.204410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.267 [2024-11-26 07:41:46.204460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.267 [2024-11-26 07:41:46.204473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.267 [2024-11-26 07:41:46.204480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.267 [2024-11-26 07:41:46.204486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.267 [2024-11-26 07:41:46.204500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.267 qpair failed and we were unable to recover it.
00:32:18.267 [2024-11-26 07:41:46.214470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.267 [2024-11-26 07:41:46.214522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.267 [2024-11-26 07:41:46.214535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.267 [2024-11-26 07:41:46.214542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.214549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.214562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.224456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.224524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.224537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.224544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.224550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.224564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.234445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.234490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.234503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.234510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.234516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.234529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.244495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.244541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.244553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.244560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.244566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.244580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.254579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.254631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.254644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.254650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.254657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.254670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.264541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.264587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.264603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.264610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.264616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.264630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.274572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.274627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.274640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.274647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.274653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.274666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.284581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.284628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.284642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.284649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.284655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.284668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.294690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.294743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.294756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.294763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.294769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.294783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.304642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.304683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.304696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.304703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.304712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.304726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.314676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.314729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.314742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.314749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.314755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.314769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.324594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.324638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.324652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.324659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.324665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.324680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.334811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.334923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.268 [2024-11-26 07:41:46.334939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.268 [2024-11-26 07:41:46.334945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.268 [2024-11-26 07:41:46.334952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.268 [2024-11-26 07:41:46.334966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.268 qpair failed and we were unable to recover it.
00:32:18.268 [2024-11-26 07:41:46.344760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.268 [2024-11-26 07:41:46.344815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.269 [2024-11-26 07:41:46.344840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.269 [2024-11-26 07:41:46.344849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.269 [2024-11-26 07:41:46.344856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.269 [2024-11-26 07:41:46.344875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.269 qpair failed and we were unable to recover it.
00:32:18.269 [2024-11-26 07:41:46.354773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.269 [2024-11-26 07:41:46.354819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.269 [2024-11-26 07:41:46.354835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.269 [2024-11-26 07:41:46.354842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.269 [2024-11-26 07:41:46.354849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.269 [2024-11-26 07:41:46.354864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.269 qpair failed and we were unable to recover it.
00:32:18.531 [2024-11-26 07:41:46.364847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.531 [2024-11-26 07:41:46.364894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.531 [2024-11-26 07:41:46.364907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.531 [2024-11-26 07:41:46.364915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.531 [2024-11-26 07:41:46.364921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.531 [2024-11-26 07:41:46.364935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.531 qpair failed and we were unable to recover it.
00:32:18.531 [2024-11-26 07:41:46.374914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.531 [2024-11-26 07:41:46.374964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.531 [2024-11-26 07:41:46.374978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.531 [2024-11-26 07:41:46.374985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.531 [2024-11-26 07:41:46.374991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.531 [2024-11-26 07:41:46.375005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.531 qpair failed and we were unable to recover it.
00:32:18.531 [2024-11-26 07:41:46.384895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.531 [2024-11-26 07:41:46.384940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.531 [2024-11-26 07:41:46.384953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.531 [2024-11-26 07:41:46.384960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.531 [2024-11-26 07:41:46.384967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.531 [2024-11-26 07:41:46.384980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.531 qpair failed and we were unable to recover it.
00:32:18.531 [2024-11-26 07:41:46.394911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.531 [2024-11-26 07:41:46.394955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.531 [2024-11-26 07:41:46.394972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.531 [2024-11-26 07:41:46.394979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.531 [2024-11-26 07:41:46.394985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.531 [2024-11-26 07:41:46.394999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.531 qpair failed and we were unable to recover it.
00:32:18.531 [2024-11-26 07:41:46.404938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.531 [2024-11-26 07:41:46.404989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.531 [2024-11-26 07:41:46.405002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.531 [2024-11-26 07:41:46.405010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.531 [2024-11-26 07:41:46.405016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.531 [2024-11-26 07:41:46.405030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.531 qpair failed and we were unable to recover it.
00:32:18.531 [2024-11-26 07:41:46.415009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.531 [2024-11-26 07:41:46.415062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.531 [2024-11-26 07:41:46.415076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.531 [2024-11-26 07:41:46.415083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.531 [2024-11-26 07:41:46.415089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.531 [2024-11-26 07:41:46.415103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.531 qpair failed and we were unable to recover it.
00:32:18.531 [2024-11-26 07:41:46.424995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.531 [2024-11-26 07:41:46.425040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.531 [2024-11-26 07:41:46.425053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.531 [2024-11-26 07:41:46.425060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.531 [2024-11-26 07:41:46.425066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.531 [2024-11-26 07:41:46.425080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.531 qpair failed and we were unable to recover it.
00:32:18.531 [2024-11-26 07:41:46.434982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.531 [2024-11-26 07:41:46.435024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.531 [2024-11-26 07:41:46.435037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.531 [2024-11-26 07:41:46.435044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.531 [2024-11-26 07:41:46.435054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.531 [2024-11-26 07:41:46.435068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.531 qpair failed and we were unable to recover it.
00:32:18.531 [2024-11-26 07:41:46.445044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.531 [2024-11-26 07:41:46.445088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.531 [2024-11-26 07:41:46.445102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.531 [2024-11-26 07:41:46.445110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.531 [2024-11-26 07:41:46.445117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.531 [2024-11-26 07:41:46.445131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.531 qpair failed and we were unable to recover it.
00:32:18.531 [2024-11-26 07:41:46.455128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.531 [2024-11-26 07:41:46.455180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.455193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.455201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.455207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.455221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.465063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.465111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.465125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.465132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.465138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.465152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.475112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.475154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.475173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.475180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.475186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.475200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.485044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.485104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.485118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.485125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.485131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.485145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.495244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.495295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.495309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.495316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.495322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.495336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.505217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.505262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.505276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.505284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.505290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.505304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.515164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.515234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.515247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.515254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.515261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.515274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.525301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.525381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.525397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.525405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.525411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.525424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.535355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.535404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.535417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.535424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.535431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.535444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.545359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.545402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.545417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.545424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.545430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.545445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.555376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.555464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.555477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.555484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.555491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.555504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.565366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.565413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.565426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.565434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.565443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.565457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.575496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.575584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.532 [2024-11-26 07:41:46.575597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.532 [2024-11-26 07:41:46.575604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.532 [2024-11-26 07:41:46.575610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.532 [2024-11-26 07:41:46.575624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.532 qpair failed and we were unable to recover it.
00:32:18.532 [2024-11-26 07:41:46.585406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.532 [2024-11-26 07:41:46.585448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.533 [2024-11-26 07:41:46.585462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.533 [2024-11-26 07:41:46.585469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.533 [2024-11-26 07:41:46.585475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.533 [2024-11-26 07:41:46.585489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.533 qpair failed and we were unable to recover it.
00:32:18.533 [2024-11-26 07:41:46.595446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.533 [2024-11-26 07:41:46.595498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.533 [2024-11-26 07:41:46.595511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.533 [2024-11-26 07:41:46.595518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.533 [2024-11-26 07:41:46.595524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.533 [2024-11-26 07:41:46.595538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.533 qpair failed and we were unable to recover it.
00:32:18.533 [2024-11-26 07:41:46.605489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.533 [2024-11-26 07:41:46.605536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.533 [2024-11-26 07:41:46.605549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.533 [2024-11-26 07:41:46.605556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.533 [2024-11-26 07:41:46.605563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.533 [2024-11-26 07:41:46.605577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.533 qpair failed and we were unable to recover it.
00:32:18.533 [2024-11-26 07:41:46.615567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.533 [2024-11-26 07:41:46.615620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.533 [2024-11-26 07:41:46.615635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.533 [2024-11-26 07:41:46.615642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.533 [2024-11-26 07:41:46.615648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.533 [2024-11-26 07:41:46.615667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.533 qpair failed and we were unable to recover it.
00:32:18.795 [2024-11-26 07:41:46.625523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.795 [2024-11-26 07:41:46.625566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.795 [2024-11-26 07:41:46.625580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.795 [2024-11-26 07:41:46.625588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.795 [2024-11-26 07:41:46.625594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.795 [2024-11-26 07:41:46.625608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.795 qpair failed and we were unable to recover it.
00:32:18.795 [2024-11-26 07:41:46.635576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.795 [2024-11-26 07:41:46.635624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.795 [2024-11-26 07:41:46.635637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.795 [2024-11-26 07:41:46.635644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.795 [2024-11-26 07:41:46.635650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.795 [2024-11-26 07:41:46.635664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.795 qpair failed and we were unable to recover it.
00:32:18.795 [2024-11-26 07:41:46.645614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.795 [2024-11-26 07:41:46.645661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.795 [2024-11-26 07:41:46.645674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.795 [2024-11-26 07:41:46.645681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.795 [2024-11-26 07:41:46.645687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.795 [2024-11-26 07:41:46.645701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.795 qpair failed and we were unable to recover it.
00:32:18.795 [2024-11-26 07:41:46.655678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.795 [2024-11-26 07:41:46.655735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.795 [2024-11-26 07:41:46.655753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.795 [2024-11-26 07:41:46.655760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.795 [2024-11-26 07:41:46.655766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.795 [2024-11-26 07:41:46.655780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.795 qpair failed and we were unable to recover it.
00:32:18.796 [2024-11-26 07:41:46.665665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.796 [2024-11-26 07:41:46.665717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.796 [2024-11-26 07:41:46.665731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.796 [2024-11-26 07:41:46.665738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.796 [2024-11-26 07:41:46.665745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.796 [2024-11-26 07:41:46.665758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.796 qpair failed and we were unable to recover it.
00:32:18.796 [2024-11-26 07:41:46.675687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.796 [2024-11-26 07:41:46.675775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.796 [2024-11-26 07:41:46.675788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.796 [2024-11-26 07:41:46.675795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.796 [2024-11-26 07:41:46.675801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.796 [2024-11-26 07:41:46.675815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.796 qpair failed and we were unable to recover it.
00:32:18.796 [2024-11-26 07:41:46.685721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.796 [2024-11-26 07:41:46.685769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.796 [2024-11-26 07:41:46.685783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.796 [2024-11-26 07:41:46.685790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.796 [2024-11-26 07:41:46.685797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.796 [2024-11-26 07:41:46.685811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.796 qpair failed and we were unable to recover it.
00:32:18.796 [2024-11-26 07:41:46.695753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.796 [2024-11-26 07:41:46.695807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.796 [2024-11-26 07:41:46.695820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.796 [2024-11-26 07:41:46.695828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.796 [2024-11-26 07:41:46.695837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.796 [2024-11-26 07:41:46.695851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.796 qpair failed and we were unable to recover it.
00:32:18.796 [2024-11-26 07:41:46.705775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.796 [2024-11-26 07:41:46.705824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.796 [2024-11-26 07:41:46.705838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.796 [2024-11-26 07:41:46.705844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.796 [2024-11-26 07:41:46.705851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.796 [2024-11-26 07:41:46.705864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.796 qpair failed and we were unable to recover it.
00:32:18.796 [2024-11-26 07:41:46.715804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.796 [2024-11-26 07:41:46.715893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.796 [2024-11-26 07:41:46.715907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.796 [2024-11-26 07:41:46.715914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.796 [2024-11-26 07:41:46.715920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.796 [2024-11-26 07:41:46.715933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.796 qpair failed and we were unable to recover it.
00:32:18.796 [2024-11-26 07:41:46.725711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:18.796 [2024-11-26 07:41:46.725776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:18.796 [2024-11-26 07:41:46.725789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:18.796 [2024-11-26 07:41:46.725796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:18.796 [2024-11-26 07:41:46.725802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:18.796 [2024-11-26 07:41:46.725816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:18.796 qpair failed and we were unable to recover it.
00:32:18.796 [2024-11-26 07:41:46.735908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.796 [2024-11-26 07:41:46.735962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.796 [2024-11-26 07:41:46.735975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.796 [2024-11-26 07:41:46.735982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.796 [2024-11-26 07:41:46.735988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.796 [2024-11-26 07:41:46.736001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.796 qpair failed and we were unable to recover it. 00:32:18.796 [2024-11-26 07:41:46.745884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.796 [2024-11-26 07:41:46.745932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.796 [2024-11-26 07:41:46.745947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.796 [2024-11-26 07:41:46.745954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.796 [2024-11-26 07:41:46.745961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.796 [2024-11-26 07:41:46.745975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.796 qpair failed and we were unable to recover it. 00:32:18.796 [2024-11-26 07:41:46.755903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.796 [2024-11-26 07:41:46.755952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.796 [2024-11-26 07:41:46.755966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.796 [2024-11-26 07:41:46.755973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.796 [2024-11-26 07:41:46.755979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.796 [2024-11-26 07:41:46.755993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.796 qpair failed and we were unable to recover it. 
00:32:18.796 [2024-11-26 07:41:46.765932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.796 [2024-11-26 07:41:46.765978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.796 [2024-11-26 07:41:46.765992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.796 [2024-11-26 07:41:46.765999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.796 [2024-11-26 07:41:46.766005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.796 [2024-11-26 07:41:46.766018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.796 qpair failed and we were unable to recover it. 00:32:18.796 [2024-11-26 07:41:46.775995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.796 [2024-11-26 07:41:46.776041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.796 [2024-11-26 07:41:46.776054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.796 [2024-11-26 07:41:46.776061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.796 [2024-11-26 07:41:46.776067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.796 [2024-11-26 07:41:46.776081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.796 qpair failed and we were unable to recover it. 00:32:18.796 [2024-11-26 07:41:46.785992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.796 [2024-11-26 07:41:46.786046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.796 [2024-11-26 07:41:46.786062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.796 [2024-11-26 07:41:46.786069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.796 [2024-11-26 07:41:46.786076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.796 [2024-11-26 07:41:46.786089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.796 qpair failed and we were unable to recover it. 
00:32:18.796 [2024-11-26 07:41:46.796017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.797 [2024-11-26 07:41:46.796061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.797 [2024-11-26 07:41:46.796078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.797 [2024-11-26 07:41:46.796085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.797 [2024-11-26 07:41:46.796092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.797 [2024-11-26 07:41:46.796106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.797 qpair failed and we were unable to recover it. 00:32:18.797 [2024-11-26 07:41:46.806018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.797 [2024-11-26 07:41:46.806064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.797 [2024-11-26 07:41:46.806080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.797 [2024-11-26 07:41:46.806088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.797 [2024-11-26 07:41:46.806094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.797 [2024-11-26 07:41:46.806110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.797 qpair failed and we were unable to recover it. 00:32:18.797 [2024-11-26 07:41:46.816034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.797 [2024-11-26 07:41:46.816080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.797 [2024-11-26 07:41:46.816093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.797 [2024-11-26 07:41:46.816100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.797 [2024-11-26 07:41:46.816107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.797 [2024-11-26 07:41:46.816121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.797 qpair failed and we were unable to recover it. 
00:32:18.797 [2024-11-26 07:41:46.826093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.797 [2024-11-26 07:41:46.826141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.797 [2024-11-26 07:41:46.826155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.797 [2024-11-26 07:41:46.826167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.797 [2024-11-26 07:41:46.826176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.797 [2024-11-26 07:41:46.826190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.797 qpair failed and we were unable to recover it. 00:32:18.797 [2024-11-26 07:41:46.836091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.797 [2024-11-26 07:41:46.836136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.797 [2024-11-26 07:41:46.836149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.797 [2024-11-26 07:41:46.836156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.797 [2024-11-26 07:41:46.836166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.797 [2024-11-26 07:41:46.836180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.797 qpair failed and we were unable to recover it. 00:32:18.797 [2024-11-26 07:41:46.846156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.797 [2024-11-26 07:41:46.846208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.797 [2024-11-26 07:41:46.846222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.797 [2024-11-26 07:41:46.846229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.797 [2024-11-26 07:41:46.846235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.797 [2024-11-26 07:41:46.846248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.797 qpair failed and we were unable to recover it. 
00:32:18.797 [2024-11-26 07:41:46.856178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.797 [2024-11-26 07:41:46.856222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.797 [2024-11-26 07:41:46.856235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.797 [2024-11-26 07:41:46.856242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.797 [2024-11-26 07:41:46.856248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.797 [2024-11-26 07:41:46.856262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.797 qpair failed and we were unable to recover it. 00:32:18.797 [2024-11-26 07:41:46.866212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.797 [2024-11-26 07:41:46.866262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.797 [2024-11-26 07:41:46.866275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.797 [2024-11-26 07:41:46.866282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.797 [2024-11-26 07:41:46.866288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.797 [2024-11-26 07:41:46.866302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.797 qpair failed and we were unable to recover it. 00:32:18.797 [2024-11-26 07:41:46.876126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.797 [2024-11-26 07:41:46.876215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.797 [2024-11-26 07:41:46.876229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.797 [2024-11-26 07:41:46.876236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.797 [2024-11-26 07:41:46.876242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.797 [2024-11-26 07:41:46.876256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.797 qpair failed and we were unable to recover it. 
00:32:18.797 [2024-11-26 07:41:46.886248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:18.797 [2024-11-26 07:41:46.886296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:18.797 [2024-11-26 07:41:46.886309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:18.797 [2024-11-26 07:41:46.886316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:18.797 [2024-11-26 07:41:46.886322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:18.797 [2024-11-26 07:41:46.886336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:18.797 qpair failed and we were unable to recover it. 00:32:19.060 [2024-11-26 07:41:46.896298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.060 [2024-11-26 07:41:46.896347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.060 [2024-11-26 07:41:46.896360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.060 [2024-11-26 07:41:46.896367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.060 [2024-11-26 07:41:46.896373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.060 [2024-11-26 07:41:46.896387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.060 qpair failed and we were unable to recover it. 00:32:19.060 [2024-11-26 07:41:46.906302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.060 [2024-11-26 07:41:46.906347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.060 [2024-11-26 07:41:46.906361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.060 [2024-11-26 07:41:46.906368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.060 [2024-11-26 07:41:46.906374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.060 [2024-11-26 07:41:46.906388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.060 qpair failed and we were unable to recover it. 
00:32:19.060 [2024-11-26 07:41:46.916344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.060 [2024-11-26 07:41:46.916412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.060 [2024-11-26 07:41:46.916428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.060 [2024-11-26 07:41:46.916435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.060 [2024-11-26 07:41:46.916441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.060 [2024-11-26 07:41:46.916454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.060 qpair failed and we were unable to recover it. 00:32:19.060 [2024-11-26 07:41:46.926372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.060 [2024-11-26 07:41:46.926419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.060 [2024-11-26 07:41:46.926432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.060 [2024-11-26 07:41:46.926439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.060 [2024-11-26 07:41:46.926445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.060 [2024-11-26 07:41:46.926459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.060 qpair failed and we were unable to recover it. 00:32:19.060 [2024-11-26 07:41:46.936379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.060 [2024-11-26 07:41:46.936423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.060 [2024-11-26 07:41:46.936435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.060 [2024-11-26 07:41:46.936442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.060 [2024-11-26 07:41:46.936449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.060 [2024-11-26 07:41:46.936462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.060 qpair failed and we were unable to recover it. 
00:32:19.060 [2024-11-26 07:41:46.946294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.060 [2024-11-26 07:41:46.946345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.060 [2024-11-26 07:41:46.946361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:46.946368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:46.946374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:46.946388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:41:46.956431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:46.956491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:46.956506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:46.956513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:46.956522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:46.956537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:41:46.966450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:46.966497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:46.966510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:46.966517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:46.966523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:46.966536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:41:46.976505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:46.976569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:46.976582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:46.976589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:46.976595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:46.976609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:41:46.986520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:46.986566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:46.986580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:46.986587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:46.986593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:46.986606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:41:46.996540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:46.996584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:46.996597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:46.996604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:46.996610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:46.996624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:41:47.006578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:47.006641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:47.006654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:47.006662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:47.006668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:47.006681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:41:47.016607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:47.016652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:47.016666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:47.016673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:47.016679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:47.016692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:41:47.026616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:47.026659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:47.026672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:47.026679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:47.026685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:47.026698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:41:47.036664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:47.036713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:47.036726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:47.036733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:47.036739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:47.036752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:41:47.046650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:47.046738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:47.046755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:47.046762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:47.046768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:47.046781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.061 [2024-11-26 07:41:47.056737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:47.056785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:47.056799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:47.056806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:47.056813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:47.056826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 
00:32:19.061 [2024-11-26 07:41:47.066699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.061 [2024-11-26 07:41:47.066745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.061 [2024-11-26 07:41:47.066758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.061 [2024-11-26 07:41:47.066765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.061 [2024-11-26 07:41:47.066771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.061 [2024-11-26 07:41:47.066785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.061 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:41:47.076764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.062 [2024-11-26 07:41:47.076849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.062 [2024-11-26 07:41:47.076862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.062 [2024-11-26 07:41:47.076869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.062 [2024-11-26 07:41:47.076876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.062 [2024-11-26 07:41:47.076889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:41:47.086809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.062 [2024-11-26 07:41:47.086866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.062 [2024-11-26 07:41:47.086879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.062 [2024-11-26 07:41:47.086886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.062 [2024-11-26 07:41:47.086896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.062 [2024-11-26 07:41:47.086909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.062 [2024-11-26 07:41:47.096848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.062 [2024-11-26 07:41:47.096904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.062 [2024-11-26 07:41:47.096929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.062 [2024-11-26 07:41:47.096938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.062 [2024-11-26 07:41:47.096944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.062 [2024-11-26 07:41:47.096963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:41:47.106869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.062 [2024-11-26 07:41:47.106963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.062 [2024-11-26 07:41:47.106988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.062 [2024-11-26 07:41:47.106997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.062 [2024-11-26 07:41:47.107004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.062 [2024-11-26 07:41:47.107023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:41:47.116767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.062 [2024-11-26 07:41:47.116851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.062 [2024-11-26 07:41:47.116867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.062 [2024-11-26 07:41:47.116875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.062 [2024-11-26 07:41:47.116881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.062 [2024-11-26 07:41:47.116896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.062 [2024-11-26 07:41:47.126917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.062 [2024-11-26 07:41:47.126963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.062 [2024-11-26 07:41:47.126977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.062 [2024-11-26 07:41:47.126984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.062 [2024-11-26 07:41:47.126991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.062 [2024-11-26 07:41:47.127005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:41:47.136934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.062 [2024-11-26 07:41:47.136984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.062 [2024-11-26 07:41:47.136998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.062 [2024-11-26 07:41:47.137005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.062 [2024-11-26 07:41:47.137011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.062 [2024-11-26 07:41:47.137025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.062 qpair failed and we were unable to recover it. 00:32:19.062 [2024-11-26 07:41:47.146918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.062 [2024-11-26 07:41:47.146963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.062 [2024-11-26 07:41:47.146976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.062 [2024-11-26 07:41:47.146983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.062 [2024-11-26 07:41:47.146989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.062 [2024-11-26 07:41:47.147003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.062 qpair failed and we were unable to recover it. 
00:32:19.323 [2024-11-26 07:41:47.157000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.323 [2024-11-26 07:41:47.157045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.323 [2024-11-26 07:41:47.157059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.323 [2024-11-26 07:41:47.157066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.323 [2024-11-26 07:41:47.157072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.323 [2024-11-26 07:41:47.157086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.323 qpair failed and we were unable to recover it. 00:32:19.323 [2024-11-26 07:41:47.167023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.323 [2024-11-26 07:41:47.167069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.323 [2024-11-26 07:41:47.167083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.323 [2024-11-26 07:41:47.167091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.323 [2024-11-26 07:41:47.167097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.323 [2024-11-26 07:41:47.167111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.323 qpair failed and we were unable to recover it. 00:32:19.323 [2024-11-26 07:41:47.177047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.323 [2024-11-26 07:41:47.177098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.323 [2024-11-26 07:41:47.177115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.323 [2024-11-26 07:41:47.177122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.323 [2024-11-26 07:41:47.177129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.323 [2024-11-26 07:41:47.177142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.323 qpair failed and we were unable to recover it. 
00:32:19.323 [2024-11-26 07:41:47.187077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.323 [2024-11-26 07:41:47.187129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.323 [2024-11-26 07:41:47.187143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.323 [2024-11-26 07:41:47.187150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.323 [2024-11-26 07:41:47.187156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.323 [2024-11-26 07:41:47.187175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.323 qpair failed and we were unable to recover it. 00:32:19.323 [2024-11-26 07:41:47.197106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.323 [2024-11-26 07:41:47.197149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.323 [2024-11-26 07:41:47.197168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.323 [2024-11-26 07:41:47.197175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.323 [2024-11-26 07:41:47.197181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.323 [2024-11-26 07:41:47.197195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.323 qpair failed and we were unable to recover it. 00:32:19.323 [2024-11-26 07:41:47.207125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.323 [2024-11-26 07:41:47.207177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.323 [2024-11-26 07:41:47.207191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.323 [2024-11-26 07:41:47.207198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.323 [2024-11-26 07:41:47.207204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.323 [2024-11-26 07:41:47.207219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.323 qpair failed and we were unable to recover it. 
00:32:19.323 [2024-11-26 07:41:47.217172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.323 [2024-11-26 07:41:47.217220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.323 [2024-11-26 07:41:47.217234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.323 [2024-11-26 07:41:47.217241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.323 [2024-11-26 07:41:47.217250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.323 [2024-11-26 07:41:47.217264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.323 qpair failed and we were unable to recover it. 00:32:19.323 [2024-11-26 07:41:47.227184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.323 [2024-11-26 07:41:47.227232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.323 [2024-11-26 07:41:47.227245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.323 [2024-11-26 07:41:47.227252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.323 [2024-11-26 07:41:47.227258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.323 [2024-11-26 07:41:47.227272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.323 qpair failed and we were unable to recover it. 00:32:19.323 [2024-11-26 07:41:47.237219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.323 [2024-11-26 07:41:47.237269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.323 [2024-11-26 07:41:47.237282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.323 [2024-11-26 07:41:47.237289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.323 [2024-11-26 07:41:47.237295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.323 [2024-11-26 07:41:47.237309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.323 qpair failed and we were unable to recover it. 
00:32:19.323 [2024-11-26 07:41:47.247207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.323 [2024-11-26 07:41:47.247254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.323 [2024-11-26 07:41:47.247267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.323 [2024-11-26 07:41:47.247274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.323 [2024-11-26 07:41:47.247281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.323 [2024-11-26 07:41:47.247294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.323 qpair failed and we were unable to recover it.
00:32:19.323 [2024-11-26 07:41:47.257227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.323 [2024-11-26 07:41:47.257280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.323 [2024-11-26 07:41:47.257294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.323 [2024-11-26 07:41:47.257301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.323 [2024-11-26 07:41:47.257308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.323 [2024-11-26 07:41:47.257321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.323 qpair failed and we were unable to recover it.
00:32:19.323 [2024-11-26 07:41:47.267287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.323 [2024-11-26 07:41:47.267342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.323 [2024-11-26 07:41:47.267356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.323 [2024-11-26 07:41:47.267363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.323 [2024-11-26 07:41:47.267369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.323 [2024-11-26 07:41:47.267383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.323 qpair failed and we were unable to recover it.
00:32:19.323 [2024-11-26 07:41:47.277334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.323 [2024-11-26 07:41:47.277381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.323 [2024-11-26 07:41:47.277394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.323 [2024-11-26 07:41:47.277401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.323 [2024-11-26 07:41:47.277408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.323 [2024-11-26 07:41:47.277421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.323 qpair failed and we were unable to recover it.
00:32:19.323 [2024-11-26 07:41:47.287348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.323 [2024-11-26 07:41:47.287396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.323 [2024-11-26 07:41:47.287410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.323 [2024-11-26 07:41:47.287417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.323 [2024-11-26 07:41:47.287423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.323 [2024-11-26 07:41:47.287437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.323 qpair failed and we were unable to recover it.
00:32:19.323 [2024-11-26 07:41:47.297398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.323 [2024-11-26 07:41:47.297448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.323 [2024-11-26 07:41:47.297462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.323 [2024-11-26 07:41:47.297469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.323 [2024-11-26 07:41:47.297475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.323 [2024-11-26 07:41:47.297488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.323 qpair failed and we were unable to recover it.
00:32:19.323 [2024-11-26 07:41:47.307383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.323 [2024-11-26 07:41:47.307462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.323 [2024-11-26 07:41:47.307478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.323 [2024-11-26 07:41:47.307485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.323 [2024-11-26 07:41:47.307491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.307505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.324 [2024-11-26 07:41:47.317440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.324 [2024-11-26 07:41:47.317484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.324 [2024-11-26 07:41:47.317497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.324 [2024-11-26 07:41:47.317504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.324 [2024-11-26 07:41:47.317510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.317524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.324 [2024-11-26 07:41:47.327500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.324 [2024-11-26 07:41:47.327559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.324 [2024-11-26 07:41:47.327573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.324 [2024-11-26 07:41:47.327580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.324 [2024-11-26 07:41:47.327586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.327599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.324 [2024-11-26 07:41:47.337536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.324 [2024-11-26 07:41:47.337626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.324 [2024-11-26 07:41:47.337640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.324 [2024-11-26 07:41:47.337647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.324 [2024-11-26 07:41:47.337653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.337667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.324 [2024-11-26 07:41:47.347523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.324 [2024-11-26 07:41:47.347575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.324 [2024-11-26 07:41:47.347589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.324 [2024-11-26 07:41:47.347596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.324 [2024-11-26 07:41:47.347605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.347619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.324 [2024-11-26 07:41:47.357558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.324 [2024-11-26 07:41:47.357605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.324 [2024-11-26 07:41:47.357618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.324 [2024-11-26 07:41:47.357625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.324 [2024-11-26 07:41:47.357631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.357645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.324 [2024-11-26 07:41:47.367570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.324 [2024-11-26 07:41:47.367632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.324 [2024-11-26 07:41:47.367645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.324 [2024-11-26 07:41:47.367652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.324 [2024-11-26 07:41:47.367658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.367671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.324 [2024-11-26 07:41:47.377607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.324 [2024-11-26 07:41:47.377654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.324 [2024-11-26 07:41:47.377669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.324 [2024-11-26 07:41:47.377676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.324 [2024-11-26 07:41:47.377682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.377696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.324 [2024-11-26 07:41:47.387484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.324 [2024-11-26 07:41:47.387527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.324 [2024-11-26 07:41:47.387540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.324 [2024-11-26 07:41:47.387547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.324 [2024-11-26 07:41:47.387553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.387567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.324 [2024-11-26 07:41:47.397633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.324 [2024-11-26 07:41:47.397680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.324 [2024-11-26 07:41:47.397694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.324 [2024-11-26 07:41:47.397701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.324 [2024-11-26 07:41:47.397707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.397720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.324 [2024-11-26 07:41:47.407669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.324 [2024-11-26 07:41:47.407719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.324 [2024-11-26 07:41:47.407733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.324 [2024-11-26 07:41:47.407740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.324 [2024-11-26 07:41:47.407746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.324 [2024-11-26 07:41:47.407760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.324 qpair failed and we were unable to recover it.
00:32:19.587 [2024-11-26 07:41:47.417582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.587 [2024-11-26 07:41:47.417630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.587 [2024-11-26 07:41:47.417645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.587 [2024-11-26 07:41:47.417652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.587 [2024-11-26 07:41:47.417658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.587 [2024-11-26 07:41:47.417673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.587 qpair failed and we were unable to recover it.
00:32:19.587 [2024-11-26 07:41:47.427725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.587 [2024-11-26 07:41:47.427767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.587 [2024-11-26 07:41:47.427781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.587 [2024-11-26 07:41:47.427788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.587 [2024-11-26 07:41:47.427795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.587 [2024-11-26 07:41:47.427808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.587 qpair failed and we were unable to recover it.
00:32:19.587 [2024-11-26 07:41:47.437762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.587 [2024-11-26 07:41:47.437811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.587 [2024-11-26 07:41:47.437828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.587 [2024-11-26 07:41:47.437835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.437841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.437855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.447786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.447829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.447843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.447851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.447858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.447872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.457803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.457897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.457911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.457918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.457924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.457938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.467811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.467854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.467868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.467875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.467881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.467895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.477854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.477910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.477923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.477930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.477940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.477954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.487892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.487943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.487956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.487963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.487969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.487983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.497917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.497961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.497975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.497982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.497988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.498002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.507945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.507995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.508009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.508016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.508022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.508036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.517957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.518009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.518023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.518030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.518036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.518050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.527956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.528006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.528020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.528026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.528033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.528047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.537919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.537969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.537982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.537989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.537995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.538009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.548036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.548076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.548089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.548096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.548102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.548116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.558041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.558084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.558098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.558105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.588 [2024-11-26 07:41:47.558111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.588 [2024-11-26 07:41:47.558125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.588 qpair failed and we were unable to recover it.
00:32:19.588 [2024-11-26 07:41:47.568099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.588 [2024-11-26 07:41:47.568142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.588 [2024-11-26 07:41:47.568163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.588 [2024-11-26 07:41:47.568170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.568177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.568190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.589 [2024-11-26 07:41:47.578056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.589 [2024-11-26 07:41:47.578108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.589 [2024-11-26 07:41:47.578122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.589 [2024-11-26 07:41:47.578130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.578137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.578151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.589 [2024-11-26 07:41:47.588139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.589 [2024-11-26 07:41:47.588203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.589 [2024-11-26 07:41:47.588218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.589 [2024-11-26 07:41:47.588225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.588232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.588246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.589 [2024-11-26 07:41:47.598148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.589 [2024-11-26 07:41:47.598196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.589 [2024-11-26 07:41:47.598209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.589 [2024-11-26 07:41:47.598216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.598222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.598236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.589 [2024-11-26 07:41:47.608205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.589 [2024-11-26 07:41:47.608253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.589 [2024-11-26 07:41:47.608266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.589 [2024-11-26 07:41:47.608273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.608282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.608296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.589 [2024-11-26 07:41:47.618257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.589 [2024-11-26 07:41:47.618305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.589 [2024-11-26 07:41:47.618318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.589 [2024-11-26 07:41:47.618326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.618332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.618345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.589 [2024-11-26 07:41:47.628117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.589 [2024-11-26 07:41:47.628165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.589 [2024-11-26 07:41:47.628179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.589 [2024-11-26 07:41:47.628186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.628192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.628206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.589 [2024-11-26 07:41:47.638238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.589 [2024-11-26 07:41:47.638287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.589 [2024-11-26 07:41:47.638301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.589 [2024-11-26 07:41:47.638308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.638314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.638328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.589 [2024-11-26 07:41:47.648298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.589 [2024-11-26 07:41:47.648346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.589 [2024-11-26 07:41:47.648359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.589 [2024-11-26 07:41:47.648366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.648372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.648386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.589 [2024-11-26 07:41:47.658339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.589 [2024-11-26 07:41:47.658386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.589 [2024-11-26 07:41:47.658399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.589 [2024-11-26 07:41:47.658406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.658413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.658426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.589 [2024-11-26 07:41:47.668426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.589 [2024-11-26 07:41:47.668494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.589 [2024-11-26 07:41:47.668506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.589 [2024-11-26 07:41:47.668513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.589 [2024-11-26 07:41:47.668519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.589 [2024-11-26 07:41:47.668534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.589 qpair failed and we were unable to recover it.
00:32:19.854 [2024-11-26 07:41:47.678401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.854 [2024-11-26 07:41:47.678489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.854 [2024-11-26 07:41:47.678504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.854 [2024-11-26 07:41:47.678511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.854 [2024-11-26 07:41:47.678518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.854 [2024-11-26 07:41:47.678532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.854 qpair failed and we were unable to recover it.
00:32:19.854 [2024-11-26 07:41:47.688394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.854 [2024-11-26 07:41:47.688444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.854 [2024-11-26 07:41:47.688458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.854 [2024-11-26 07:41:47.688465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.854 [2024-11-26 07:41:47.688471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.854 [2024-11-26 07:41:47.688485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.854 qpair failed and we were unable to recover it.
00:32:19.854 [2024-11-26 07:41:47.698448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.854 [2024-11-26 07:41:47.698497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.854 [2024-11-26 07:41:47.698514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.854 [2024-11-26 07:41:47.698521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.854 [2024-11-26 07:41:47.698527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.854 [2024-11-26 07:41:47.698541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.854 qpair failed and we were unable to recover it.
00:32:19.854 [2024-11-26 07:41:47.708417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.854 [2024-11-26 07:41:47.708493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.854 [2024-11-26 07:41:47.708506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.854 [2024-11-26 07:41:47.708513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.854 [2024-11-26 07:41:47.708520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.854 [2024-11-26 07:41:47.708534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.854 qpair failed and we were unable to recover it.
00:32:19.854 [2024-11-26 07:41:47.718483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.854 [2024-11-26 07:41:47.718545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.854 [2024-11-26 07:41:47.718558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.854 [2024-11-26 07:41:47.718565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.718572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.718585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.728497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.728547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.728561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.728568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.728574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.728588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.738528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.738615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.738628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.738635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.738645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.738659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.748595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.748642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.748655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.748662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.748668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.748682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.758601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.758646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.758660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.758667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.758673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.758687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.768657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.768705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.768720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.768727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.768734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.768747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.778698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.778750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.778763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.778770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.778776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.778789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.788705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.788751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.788765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.788772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.788778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.788792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.798704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.798750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.798766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.798773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.798780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.798795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.808748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.808792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.808807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.808815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.808821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.808835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.818772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.818822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.818835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.818842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.818849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.818863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.828786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.828832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.828849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.828856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.828862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.828876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.838820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.838867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.838881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.838887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.855 [2024-11-26 07:41:47.838894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.855 [2024-11-26 07:41:47.838907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.855 qpair failed and we were unable to recover it.
00:32:19.855 [2024-11-26 07:41:47.848845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.855 [2024-11-26 07:41:47.848907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.855 [2024-11-26 07:41:47.848921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.855 [2024-11-26 07:41:47.848928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.856 [2024-11-26 07:41:47.848934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.856 [2024-11-26 07:41:47.848947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.856 qpair failed and we were unable to recover it.
00:32:19.856 [2024-11-26 07:41:47.858882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.856 [2024-11-26 07:41:47.858947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.856 [2024-11-26 07:41:47.858960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.856 [2024-11-26 07:41:47.858967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.856 [2024-11-26 07:41:47.858974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.856 [2024-11-26 07:41:47.858987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.856 qpair failed and we were unable to recover it.
00:32:19.856 [2024-11-26 07:41:47.868808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.856 [2024-11-26 07:41:47.868864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.856 [2024-11-26 07:41:47.868878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.856 [2024-11-26 07:41:47.868885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.856 [2024-11-26 07:41:47.868895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.856 [2024-11-26 07:41:47.868909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.856 qpair failed and we were unable to recover it.
00:32:19.856 [2024-11-26 07:41:47.878909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.856 [2024-11-26 07:41:47.878968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.856 [2024-11-26 07:41:47.878982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.856 [2024-11-26 07:41:47.878989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.856 [2024-11-26 07:41:47.878995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.856 [2024-11-26 07:41:47.879009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.856 qpair failed and we were unable to recover it.
00:32:19.856 [2024-11-26 07:41:47.888943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.856 [2024-11-26 07:41:47.889037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.856 [2024-11-26 07:41:47.889050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.856 [2024-11-26 07:41:47.889057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.856 [2024-11-26 07:41:47.889063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.856 [2024-11-26 07:41:47.889076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.856 qpair failed and we were unable to recover it.
00:32:19.856 [2024-11-26 07:41:47.898987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.856 [2024-11-26 07:41:47.899065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.856 [2024-11-26 07:41:47.899078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.856 [2024-11-26 07:41:47.899085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.856 [2024-11-26 07:41:47.899091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.856 [2024-11-26 07:41:47.899105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.856 qpair failed and we were unable to recover it.
00:32:19.856 [2024-11-26 07:41:47.908985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.856 [2024-11-26 07:41:47.909027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.856 [2024-11-26 07:41:47.909041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.856 [2024-11-26 07:41:47.909048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.856 [2024-11-26 07:41:47.909054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.856 [2024-11-26 07:41:47.909068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.856 qpair failed and we were unable to recover it.
00:32:19.856 [2024-11-26 07:41:47.919023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.856 [2024-11-26 07:41:47.919113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.856 [2024-11-26 07:41:47.919126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.856 [2024-11-26 07:41:47.919133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.856 [2024-11-26 07:41:47.919139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.856 [2024-11-26 07:41:47.919153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.856 qpair failed and we were unable to recover it.
00:32:19.856 [2024-11-26 07:41:47.929055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:19.856 [2024-11-26 07:41:47.929099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:19.856 [2024-11-26 07:41:47.929112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:19.856 [2024-11-26 07:41:47.929119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:19.856 [2024-11-26 07:41:47.929126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0
00:32:19.856 [2024-11-26 07:41:47.929140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:19.856 qpair failed and we were unable to recover it.
00:32:19.856 [2024-11-26 07:41:47.939086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:19.856 [2024-11-26 07:41:47.939133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:19.856 [2024-11-26 07:41:47.939146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:19.856 [2024-11-26 07:41:47.939153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:19.856 [2024-11-26 07:41:47.939163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:19.856 [2024-11-26 07:41:47.939177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:19.856 qpair failed and we were unable to recover it. 00:32:20.120 [2024-11-26 07:41:47.948963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.120 [2024-11-26 07:41:47.949002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.120 [2024-11-26 07:41:47.949017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.120 [2024-11-26 07:41:47.949024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.120 [2024-11-26 07:41:47.949030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.120 [2024-11-26 07:41:47.949045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.120 qpair failed and we were unable to recover it. 00:32:20.120 [2024-11-26 07:41:47.959076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.120 [2024-11-26 07:41:47.959120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.120 [2024-11-26 07:41:47.959137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.120 [2024-11-26 07:41:47.959144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.120 [2024-11-26 07:41:47.959151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.120 [2024-11-26 07:41:47.959168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.120 qpair failed and we were unable to recover it. 
00:32:20.120 [2024-11-26 07:41:47.969132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.120 [2024-11-26 07:41:47.969221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.120 [2024-11-26 07:41:47.969235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.120 [2024-11-26 07:41:47.969242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.120 [2024-11-26 07:41:47.969248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.120 [2024-11-26 07:41:47.969261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.120 qpair failed and we were unable to recover it. 00:32:20.120 [2024-11-26 07:41:47.979141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.120 [2024-11-26 07:41:47.979194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.120 [2024-11-26 07:41:47.979207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.120 [2024-11-26 07:41:47.979214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.120 [2024-11-26 07:41:47.979220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.120 [2024-11-26 07:41:47.979233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.120 qpair failed and we were unable to recover it. 00:32:20.120 [2024-11-26 07:41:47.989221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.120 [2024-11-26 07:41:47.989299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.120 [2024-11-26 07:41:47.989312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.120 [2024-11-26 07:41:47.989319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.120 [2024-11-26 07:41:47.989325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.120 [2024-11-26 07:41:47.989339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.120 qpair failed and we were unable to recover it. 
00:32:20.120 [2024-11-26 07:41:47.999268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.120 [2024-11-26 07:41:47.999359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.120 [2024-11-26 07:41:47.999373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.120 [2024-11-26 07:41:47.999380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.120 [2024-11-26 07:41:47.999390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.120 [2024-11-26 07:41:47.999404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.120 qpair failed and we were unable to recover it. 00:32:20.120 [2024-11-26 07:41:48.009251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.120 [2024-11-26 07:41:48.009300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.120 [2024-11-26 07:41:48.009314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.120 [2024-11-26 07:41:48.009321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.120 [2024-11-26 07:41:48.009327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.120 [2024-11-26 07:41:48.009340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.120 qpair failed and we were unable to recover it. 00:32:20.120 [2024-11-26 07:41:48.019302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.120 [2024-11-26 07:41:48.019354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.120 [2024-11-26 07:41:48.019367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.121 [2024-11-26 07:41:48.019374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.121 [2024-11-26 07:41:48.019380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.121 [2024-11-26 07:41:48.019394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.121 qpair failed and we were unable to recover it. 
00:32:20.121 [2024-11-26 07:41:48.029292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.121 [2024-11-26 07:41:48.029338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.121 [2024-11-26 07:41:48.029352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.121 [2024-11-26 07:41:48.029359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.121 [2024-11-26 07:41:48.029365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.121 [2024-11-26 07:41:48.029378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.121 qpair failed and we were unable to recover it. 00:32:20.121 [2024-11-26 07:41:48.039336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.121 [2024-11-26 07:41:48.039376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.121 [2024-11-26 07:41:48.039389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.121 [2024-11-26 07:41:48.039396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.121 [2024-11-26 07:41:48.039403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.121 [2024-11-26 07:41:48.039416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.121 qpair failed and we were unable to recover it. 00:32:20.121 [2024-11-26 07:41:48.049359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.121 [2024-11-26 07:41:48.049416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.121 [2024-11-26 07:41:48.049430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.121 [2024-11-26 07:41:48.049437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.121 [2024-11-26 07:41:48.049443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.121 [2024-11-26 07:41:48.049456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.121 qpair failed and we were unable to recover it. 
00:32:20.121 [2024-11-26 07:41:48.059383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.121 [2024-11-26 07:41:48.059464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.121 [2024-11-26 07:41:48.059478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.121 [2024-11-26 07:41:48.059485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.121 [2024-11-26 07:41:48.059491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.121 [2024-11-26 07:41:48.059505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.121 qpair failed and we were unable to recover it. 00:32:20.121 [2024-11-26 07:41:48.069416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.121 [2024-11-26 07:41:48.069456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.121 [2024-11-26 07:41:48.069469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.121 [2024-11-26 07:41:48.069476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.121 [2024-11-26 07:41:48.069483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.121 [2024-11-26 07:41:48.069496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.121 qpair failed and we were unable to recover it. 00:32:20.121 [2024-11-26 07:41:48.079444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.121 [2024-11-26 07:41:48.079504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.121 [2024-11-26 07:41:48.079518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.121 [2024-11-26 07:41:48.079525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.121 [2024-11-26 07:41:48.079531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.121 [2024-11-26 07:41:48.079544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.121 qpair failed and we were unable to recover it. 
00:32:20.121 [2024-11-26 07:41:48.089375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.121 [2024-11-26 07:41:48.089424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.121 [2024-11-26 07:41:48.089442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.121 [2024-11-26 07:41:48.089449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.121 [2024-11-26 07:41:48.089455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.121 [2024-11-26 07:41:48.089469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.121 qpair failed and we were unable to recover it. 00:32:20.121 [2024-11-26 07:41:48.099497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.121 [2024-11-26 07:41:48.099544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.121 [2024-11-26 07:41:48.099557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.121 [2024-11-26 07:41:48.099565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.121 [2024-11-26 07:41:48.099571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.121 [2024-11-26 07:41:48.099584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.121 qpair failed and we were unable to recover it. 00:32:20.121 [2024-11-26 07:41:48.109419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:32:20.121 [2024-11-26 07:41:48.109469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:32:20.121 [2024-11-26 07:41:48.109482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:32:20.121 [2024-11-26 07:41:48.109489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:20.121 [2024-11-26 07:41:48.109496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xab00c0 00:32:20.121 [2024-11-26 07:41:48.109509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:32:20.121 qpair failed and we were unable to recover it. 00:32:20.121 [2024-11-26 07:41:48.109642] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:32:20.121 A controller has encountered a failure and is being reset. 00:32:20.121 Controller properly reset. 
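The records above show the host re-issuing the I/O-queue CONNECT roughly every 10 ms while the target rejects each attempt with "Unknown controller ID", until the keep-alive finally fails and the controller is reset. When triaging a console capture like this, it is quicker to tally the repeated records than to read them one by one. A minimal shell sketch, assuming the console output has been saved one record per line to a hypothetical file console.log:

    # count failed fabric CONNECTs and unrecovered qpairs (both strings copied from the records above)
    grep -c 'Connect command failed' console.log
    grep -c 'qpair failed and we were unable to recover it' console.log
    # group errors by SPDK source site (file:line:function) to confirm they all come from the same paths
    grep -o '[a-z_]*\.c: *[0-9]*:[A-Za-z_]*' console.log | sort | uniq -c | sort -rn

For this capture the tallies would be dominated by the nvme_fabric.c/nvme_tcp.c/nvme_qpair.c connect-poll path, matching the single ctrlr.c rejection site logged on the target side.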
00:32:20.121 Read completed with error (sct=0, sc=8)
00:32:20.121 starting I/O failed
00:32:20.121 Write completed with error (sct=0, sc=8)
00:32:20.121 starting I/O failed
[... the same pair of records repeats for all 32 queued reads and writes, each completion followed by "starting I/O failed" ...]
00:32:20.122 [2024-11-26 07:41:48.166481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:20.122 Initializing NVMe Controllers
00:32:20.122 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:20.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:20.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:32:20.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:32:20.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:32:20.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:32:20.122 Initialization complete. Launching workers.
00:32:20.122 Starting thread on core 1
00:32:20.122 Starting thread on core 2
00:32:20.122 Starting thread on core 3
00:32:20.122 Starting thread on core 0
00:32:20.122 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:32:20.122
00:32:20.122 real 0m11.344s
00:32:20.122 user 0m21.992s
00:32:20.122 sys 0m3.947s
00:32:20.122 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:20.122 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:20.122 ************************************
00:32:20.122 END TEST nvmf_target_disconnect_tc2
00:32:20.122 ************************************
00:32:20.122
00:32:20.382 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:20.383 rmmod nvme_tcp
00:32:20.383 rmmod nvme_fabrics
00:32:20.383 rmmod nvme_keyring
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1650484 ']'
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1650484
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1650484 ']'
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1650484
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1650484
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:32:20.383 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1650484'
killing process with pid 1650484
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1650484
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1650484
00:32:20.644 07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
07:41:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:22.559 07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:22.559
00:32:22.559 real 0m21.759s
00:32:22.559 user 0m49.496s
00:32:22.559 sys 0m10.115s
07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
07:41:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:32:22.559 ************************************
00:32:22.559 END TEST nvmf_target_disconnect
00:32:22.559 ************************************
00:32:22.559 07:41:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:32:22.559
00:32:22.559 real 6m33.110s
00:32:22.559 user 11m21.675s
00:32:22.559 sys 2m16.285s
07:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
07:41:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:22.559 ************************************
00:32:22.559 END TEST nvmf_host
00:32:22.559 ************************************
00:32:22.821 07:41:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
07:41:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
07:41:50 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
07:41:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
07:41:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
07:41:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:22.821 ************************************
00:32:22.821 START TEST nvmf_target_core_interrupt_mode
00:32:22.821 ************************************
00:32:22.821 07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:32:22.821 * Looking for test storage...
00:32:22.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]]
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[common/autotest_common.sh@1706-@1707: LCOV_OPTS and LCOV exported with the --rc lcov_*, genhtml_* and geninfo_* coverage flags]
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
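The scripts/common.sh trace above walks the version check behind lt 1.15 2: each version string is split on '.', '-' and ':' (the IFS=.-: records), the component counts are taken (ver1_l=2, ver2_l=1), and the components are compared numerically until one side wins (here 1 < 2, hence the return 0). A condensed re-sketch of that comparison under the same semantics, offered as an illustration rather than SPDK's verbatim cmp_versions/lt implementation:

    # succeeds (returns 0) when version $1 sorts strictly before version $2
    lt() {
        local IFS='.-:' v a b ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # a missing component compares as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                              # equal versions are not less-than
    }
    lt 1.15 2 && echo '1.15 < 2'              # mirrors the return 0 seen in the trace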
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6: PATH prefixed with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, then exported and echoed]
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:23.084 ************************************
00:32:23.084 START TEST nvmf_abort
00:32:23.084 ************************************
00:32:23.084 07:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:32:23.084 * Looking for test storage...
00:32:23.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
[07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort: the same common/autotest_common.sh@1692-@1707 lcov check and scripts/common.sh cmp_versions 1.15 '<' 2 walkthrough as above]
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[nvmf/common.sh@7-@22 and scripts/common.sh@15-@553 re-run under the nvmf_abort prefix: same NVMF_*/NVME_* defaults, nvme gen-hostnqn, and paths/export.sh PATH export as above]
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
07:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
07:41:58
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:31.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
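The gather_supported_nvmf_pci_devs trace above builds per-family arrays (e810, x722, mlx) keyed by PCI vendor:device IDs, then keeps only the family selected for this run (here SPDK_TEST_NVMF_NICS=e810, matched by the 0x8086:0x159b adapters found below). A standalone sketch of the same classification, assuming lspci is available; the parsing loop and variable names are illustrative, not the SPDK script itself:

    # Classify NICs by PCI vendor:device ID as the trace above does.
    # The ID table is copied from the log; everything else is an assumption.
    intel=8086 mellanox=15b3
    e810=() x722=() mlx=()
    while read -r addr _ id _; do
      case "$id" in
        "$intel:1592"|"$intel:159b") e810+=("$addr") ;;  # E810 family ('ice')
        "$intel:37d2")               x722+=("$addr") ;;  # X722 family
        "$mellanox:"*)               mlx+=("$addr") ;;   # ConnectX family
      esac
    done < <(lspci -Dn)  # lines look like: 0000:4b:00.0 0200: 8086:159b (rev 02)
    echo "E810 ports: ${e810[*]:-none}"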
00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:31.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:31.491 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:31.491 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:31.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:32:31.491 00:32:31.491 --- 10.0.0.2 ping statistics --- 00:32:31.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.491 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:31.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:32:31.491 00:32:31.491 --- 10.0.0.1 ping statistics --- 00:32:31.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.491 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1656150 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1656150 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1656150 ']' 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.491 07:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.491 [2024-11-26 07:41:58.822842] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:31.491 [2024-11-26 07:41:58.823972] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:32:31.491 [2024-11-26 07:41:58.824024] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.491 [2024-11-26 07:41:58.898723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:31.491 [2024-11-26 07:41:58.944851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.491 [2024-11-26 07:41:58.944895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.491 [2024-11-26 07:41:58.944909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.491 [2024-11-26 07:41:58.944914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.491 [2024-11-26 07:41:58.944919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.491 [2024-11-26 07:41:58.946832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:31.491 [2024-11-26 07:41:58.946991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.491 [2024-11-26 07:41:58.946994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:31.491 [2024-11-26 07:41:59.017376] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:31.491 [2024-11-26 07:41:59.018189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:31.491 [2024-11-26 07:41:59.019188] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
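The nvmfappstart step traced here launches the target inside the test namespace with -m 0xE (three reactors) and --interrupt-mode, then waits for the RPC socket before any command is issued; the per-thread interrupt-mode notices continue directly below. A minimal standalone equivalent, with the command line taken verbatim from the trace and the socket-polling loop an assumption (the real waitforlisten helper lives in autotest_common.sh and watches /var/tmp/spdk.sock):

    # Launch the target in the namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do                 # ~10s budget, assumed
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }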
00:32:31.491 [2024-11-26 07:41:59.019267] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:31.491 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.491 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:32:31.491 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.491 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.491 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.491 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.491 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:32:31.491 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.491 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.491 [2024-11-26 07:41:59.111867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.492 Malloc0 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.492 Delay0 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
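Each rpc_cmd traced here shells out to scripts/rpc.py against /var/tmp/spdk.sock. Condensed into direct invocations (the namespace attach and the two listeners continue just below), every flag is copied verbatim from the trace; only the relative rpc.py path is an assumption:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB bdev, 4096 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000        # delay bdev wrapping Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The Delay0 wrapper is what makes the abort test meaningful: with large artificial latencies, nearly every I/O is still outstanding when its abort arrives.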
00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.492 [2024-11-26 07:41:59.211805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.492 07:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:32:31.492 [2024-11-26 07:41:59.355870] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:33.498 Initializing NVMe Controllers 00:32:33.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:33.498 controller IO queue size 128 less than required 00:32:33.498 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:32:33.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:32:33.498 Initialization complete. Launching workers. 
00:32:33.498 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28421 00:32:33.498 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28482, failed to submit 66 00:32:33.498 success 28421, unsuccessful 61, failed 0 00:32:33.498 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:33.498 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:33.498 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:33.498 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:33.498 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:32:33.498 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:32:33.498 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:33.498 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:32:33.498 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:33.499 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:32:33.499 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:33.499 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:33.499 rmmod nvme_tcp 00:32:33.759 rmmod nvme_fabrics 00:32:33.759 rmmod nvme_keyring 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1656150 ']' 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1656150 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1656150 ']' 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1656150 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1656150 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1656150' 00:32:33.759 killing process with pid 1656150 
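The counters above read as follows (interpretation hedged, based on the abort example's summary format): the initiator completed 127 I/Os normally and had 28421 aborted out from under it; of the 28482 abort commands it managed to submit, 28421 succeeded, 61 completed without finding their target I/O still outstanding, 66 could not be submitted at all, and none failed outright. The earlier "queue size 128 less than required" warning is expected against a Delay0 namespace. Rerunning the same workload by hand, with paths and flags copied from the log:

    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128   # core mask, run time (s), queue depth, as in the test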
00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1656150 00:32:33.759 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1656150 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.020 07:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.931 07:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:35.931 00:32:35.931 real 0m13.004s 00:32:35.931 user 0m11.340s 00:32:35.931 sys 0m6.985s 00:32:35.931 07:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.931 07:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:32:35.931 ************************************ 00:32:35.931 END TEST nvmf_abort 00:32:35.931 ************************************ 00:32:36.192 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:36.193 ************************************ 00:32:36.193 START TEST nvmf_ns_hotplug_stress 00:32:36.193 ************************************ 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:32:36.193 * Looking for test storage... 
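The nvmf_abort run above ends with nvmftestfini: unload the NVMe/TCP modules, kill the target, strip the SPDK-tagged iptables rule, delete the namespace, and flush the host-side address. Condensed into standalone commands as the trace shows them (the modprobe retry loop and error handling are simplified):

    # Teardown mirroring the nvmftestfini trace above.
    modprobe -r nvme-tcp nvme-fabrics 2>/dev/null || true
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # removes the target-side port
    ip -4 addr flush cvl_0_1                               # host-side interface

Tagging the rules with an SPDK_NVMF comment at setup time is what lets the grep -v pass remove exactly the test's own firewall entries and nothing else.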
00:32:36.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.193 --rc genhtml_branch_coverage=1 00:32:36.193 --rc genhtml_function_coverage=1 00:32:36.193 --rc genhtml_legend=1 00:32:36.193 --rc geninfo_all_blocks=1 00:32:36.193 --rc geninfo_unexecuted_blocks=1 00:32:36.193 00:32:36.193 ' 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.193 --rc genhtml_branch_coverage=1 00:32:36.193 --rc genhtml_function_coverage=1 00:32:36.193 --rc genhtml_legend=1 00:32:36.193 --rc geninfo_all_blocks=1 00:32:36.193 --rc geninfo_unexecuted_blocks=1 00:32:36.193 00:32:36.193 ' 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.193 --rc genhtml_branch_coverage=1 00:32:36.193 --rc genhtml_function_coverage=1 00:32:36.193 --rc genhtml_legend=1 00:32:36.193 --rc geninfo_all_blocks=1 00:32:36.193 --rc geninfo_unexecuted_blocks=1 00:32:36.193 00:32:36.193 ' 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:36.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.193 --rc genhtml_branch_coverage=1 00:32:36.193 --rc genhtml_function_coverage=1 
00:32:36.193 --rc genhtml_legend=1 00:32:36.193 --rc geninfo_all_blocks=1 00:32:36.193 --rc geninfo_unexecuted_blocks=1 00:32:36.193 00:32:36.193 ' 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.193 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
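The cmp_versions trace repeated above (once per test file) decides whether the installed lcov predates 2.0 and therefore needs the legacy --rc lcov_* option spellings exported in LCOV_OPTS. A compact equivalent of the comparison; the padding behavior for unequal-length versions is inferred, since the trace only exercises the 1.15-vs-2 path:

    version_lt() {                       # returns 0 iff $1 < $2
      local IFS=. i n
      local -a a=($1) b=($2)
      n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1                           # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: keep the --rc lcov_* spellings"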
00:32:36.454 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:32:36.455 07:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:44.598 07:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:44.598 07:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:44.598 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:44.599 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:44.599 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.599 
07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:44.599 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:44.599 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:44.599 07:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:44.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:44.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:32:44.599 00:32:44.599 --- 10.0.0.2 ping statistics --- 00:32:44.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.599 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:44.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:44.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:32:44.599 00:32:44.599 --- 10.0.0.1 ping statistics --- 00:32:44.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.599 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1661462 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1661462 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1661462 ']' 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
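
At this point nvmftestinit is done: the two ice-bound E810 ports discovered above have been split into a back-to-back NVMe/TCP topology, with cvl_0_0 pushed into a private network namespace as the target side and cvl_0_1 left in the root namespace as the initiator side. A condensed recap of that setup, using only commands that appear in the trace above (the comment tag the ipts wrapper appends to the iptables rule is dropped here for brevity):

  # flush stale addressing, then move the target port into its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two single-packet pings (root namespace to 10.0.0.2, then target namespace back to 10.0.0.1) verify the path in both directions before any NVMe traffic is attempted; the sub-millisecond round-trip times are consistent with the two ports being linked directly rather than routed.
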
00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.599 07:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:44.600 [2024-11-26 07:42:11.903660] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:44.600 [2024-11-26 07:42:11.904783] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:32:44.600 [2024-11-26 07:42:11.904834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:44.600 [2024-11-26 07:42:12.005949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:44.600 [2024-11-26 07:42:12.057361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:44.600 [2024-11-26 07:42:12.057413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:44.600 [2024-11-26 07:42:12.057422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:44.600 [2024-11-26 07:42:12.057429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:44.600 [2024-11-26 07:42:12.057436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:44.600 [2024-11-26 07:42:12.059471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:44.600 [2024-11-26 07:42:12.059633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.600 [2024-11-26 07:42:12.059635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:44.600 [2024-11-26 07:42:12.136219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:44.600 [2024-11-26 07:42:12.137295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:44.600 [2024-11-26 07:42:12.137627] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:44.600 [2024-11-26 07:42:12.137803] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
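
The target is now up inside the namespace: nvmf_tgt was launched with -i 0 -e 0xFFFF --interrupt-mode -m 0xE, and the notices above show DPDK initializing, three reactors starting on cores 1-3, and every poll-group thread being switched to interrupt mode. The "Waiting for process..." message brackets waitforlisten, which in essence polls the UNIX-domain RPC socket until the new process answers. A minimal sketch of that pattern, assuming a cheap existing RPC (rpc_get_methods) as the probe; the retry count and sleep interval here are illustrative, not the exact common.sh values:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  pid=1661462  # nvmfpid captured above
  for _ in $(seq 1 100); do
      # give up immediately if the target died during startup
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      # any successful RPC proves /var/tmp/spdk.sock is accepting requests
      if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done
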
00:32:44.860 07:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.860 07:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:32:44.860 07:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:44.860 07:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:44.860 07:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:44.860 07:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:44.860 07:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:32:44.860 07:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:44.860 [2024-11-26 07:42:12.924599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.121 07:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:45.121 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:45.381 [2024-11-26 07:42:13.309508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.381 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:45.641 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:32:45.641 Malloc0 00:32:45.641 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:45.902 Delay0 00:32:45.902 07:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:46.162 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:32:46.423 NULL1 00:32:46.423 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
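
Target-side plumbing is complete: a TCP transport with an 8192-byte I/O unit, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and two bdevs to juggle — Delay0 (a bdev_delay wrapper over the 32 MiB Malloc0) and the 1000 MiB null bdev NULL1, which goes in as namespace 1. Everything from here on is the stress loop itself: spdk_nvme_perf drives 30 seconds of queue-depth-128 random reads against the subsystem while the script hot-removes namespace 1, re-adds Delay0, and grows NULL1 a megabyte at a time. Reconstructed from the ns_hotplug_stress.sh@44-@50 trace lines that repeat below (variable names mirror the log; treat this as a sketch of the loop, not the verbatim script):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  # keep cycling for as long as the background perf job (PERF_PID) is alive
  while kill -0 "$PERF_PID" 2>/dev/null; do
      # hot-remove namespace 1 out from under the running read workload
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      # hot-add the delay-wrapped namespace back
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      # resize the null bdev so every iteration lands on a fresh size
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"
  done

Each bare "true" in the trace below is the JSON-RPC response to a bdev_null_resize call; the null_size counter climbing 1001, 1002, ... is the loop ticking over once per pass.
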
00:32:46.423 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1661837 00:32:46.423 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:46.423 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:32:46.423 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:46.684 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:46.945 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:32:46.945 07:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:32:47.206 true 00:32:47.206 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:47.206 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.467 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:47.467 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:32:47.467 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:32:47.728 true 00:32:47.728 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:47.728 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:47.989 07:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:48.250 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:48.250 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:48.250 true 00:32:48.511 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:48.511 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:48.511 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:48.772 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:48.772 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:49.032 true 00:32:49.032 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:49.032 07:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.292 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:49.293 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:49.293 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:49.553 true 00:32:49.553 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:49.553 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:49.814 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:49.814 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:49.814 07:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:50.074 true 00:32:50.074 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:50.074 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.334 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:32:50.594 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:32:50.594 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:50.594 true 00:32:50.594 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:50.594 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:50.854 07:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:51.113 07:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:51.113 07:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:51.113 true 00:32:51.374 07:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:51.374 07:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:51.374 07:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:51.634 07:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:32:51.634 07:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:51.893 true 00:32:51.893 07:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:51.893 07:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:51.894 07:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:52.153 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:32:52.153 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:52.414 true 00:32:52.414 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1661837 00:32:52.414 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:52.674 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:52.674 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:52.674 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:52.934 true 00:32:52.934 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:52.934 07:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:53.194 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:53.194 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:53.194 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:53.454 true 00:32:53.454 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:53.454 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:53.714 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:53.975 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:53.975 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:53.975 true 00:32:53.975 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:53.975 07:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:54.234 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:54.493 07:42:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:54.493 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:54.493 true 00:32:54.493 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:54.493 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:54.753 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:55.014 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:55.014 07:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:55.014 true 00:32:55.274 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:55.274 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:55.274 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:55.534 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:55.534 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:55.795 true 00:32:55.795 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:55.795 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:55.795 07:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:56.060 07:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:56.060 07:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:56.324 true 00:32:56.324 07:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:56.324 07:42:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:56.324 07:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:56.583 07:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:56.583 07:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:56.844 true 00:32:56.844 07:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:56.844 07:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:57.104 07:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.104 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:57.105 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:57.365 true 00:32:57.365 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:57.365 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:57.625 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:57.884 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:57.884 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:57.884 true 00:32:57.884 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:57.884 07:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:58.143 07:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:58.402 07:42:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:58.402 07:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:58.402 true 00:32:58.402 07:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:58.402 07:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:58.662 07:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:58.921 07:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:58.921 07:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:58.921 true 00:32:58.921 07:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:58.921 07:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.182 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:59.442 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:59.442 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:59.442 true 00:32:59.703 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:32:59.703 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:59.703 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:59.964 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:59.964 07:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:33:00.225 true 00:33:00.225 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:00.225 07:42:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:00.225 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:00.485 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:33:00.486 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:33:00.745 true 00:33:00.745 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:00.745 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:01.006 07:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:01.006 07:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:33:01.006 07:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:33:01.268 true 00:33:01.268 07:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:01.268 07:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:01.528 07:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:01.528 07:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:33:01.528 07:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:33:01.787 true 00:33:01.787 07:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:01.787 07:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:02.048 07:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.048 07:42:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:33:02.048 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:33:02.309 true 00:33:02.309 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:02.309 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:02.570 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:02.831 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:33:02.831 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:33:02.831 true 00:33:02.831 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:02.831 07:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.091 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:03.350 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:33:03.350 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:33:03.350 true 00:33:03.350 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:03.350 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:03.609 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:03.868 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:33:03.868 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:33:03.868 true 00:33:04.127 07:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:04.127 07:42:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:04.127 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:04.385 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:33:04.386 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:33:04.645 true 00:33:04.645 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:04.645 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:04.645 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:04.904 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:33:04.904 07:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:33:05.163 true 00:33:05.163 07:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:05.163 07:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:05.422 07:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:05.422 07:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:33:05.422 07:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:33:05.681 true 00:33:05.681 07:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:05.681 07:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:05.940 07:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:05.940 07:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:33:05.940 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:33:06.198 true 00:33:06.198 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:06.198 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.458 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:06.718 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:33:06.718 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:33:06.718 true 00:33:06.718 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:06.718 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:06.977 07:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:07.237 07:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:33:07.237 07:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:33:07.237 true 00:33:07.237 07:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:07.237 07:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.499 07:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:07.759 07:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:33:07.759 07:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:33:07.759 true 00:33:08.019 07:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:08.019 07:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:08.019 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:08.279 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:33:08.279 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:33:08.539 true 00:33:08.539 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:08.539 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:08.539 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:08.799 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:33:08.799 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:33:09.059 true 00:33:09.059 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:09.059 07:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.319 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:09.319 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:33:09.319 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:33:09.578 true 00:33:09.578 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:09.578 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:09.840 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:09.840 07:42:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:33:09.840 07:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:33:10.100 true 00:33:10.100 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:10.100 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:10.360 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:10.621 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:33:10.621 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:33:10.621 true 00:33:10.621 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:10.621 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:10.881 07:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:11.140 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:33:11.140 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:33:11.140 true 00:33:11.140 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:11.140 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:11.399 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:11.658 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:33:11.658 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:33:11.658 true 00:33:11.658 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:11.658 07:42:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:11.918 07:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:12.178 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:33:12.178 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:33:12.178 true 00:33:12.439 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:12.439 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:12.439 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:12.698 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:33:12.699 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:33:12.958 true 00:33:12.958 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:12.958 07:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:12.958 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:13.217 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:33:13.217 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:33:13.478 true 00:33:13.478 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:13.478 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:13.737 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:13.737 07:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:33:13.737 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:33:13.996 true 00:33:13.996 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:13.996 07:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:14.254 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:14.254 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:33:14.254 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:33:14.513 true 00:33:14.513 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:14.513 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:14.773 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:15.033 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:33:15.033 07:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:33:15.033 true 00:33:15.033 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:15.033 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:15.293 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:15.552 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:33:15.552 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:33:15.552 true 00:33:15.812 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:15.812 07:42:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:15.812 07:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:16.073 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:33:16.073 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:33:16.333 true 00:33:16.333 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837 00:33:16.333 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:16.333 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:16.592 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:33:16.592 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:33:16.851 true 00:33:16.851 Initializing NVMe Controllers 00:33:16.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:16.851 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:33:16.851 Controller IO queue size 128, less than required. 00:33:16.851 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:16.851 WARNING: Some requested NVMe devices were skipped 00:33:16.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:33:16.851 Initialization complete. Launching workers. 
00:33:16.851 ========================================================
00:33:16.851 Latency(us)
00:33:16.851 Device Information : IOPS MiB/s Average min max
00:33:16.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30276.97 14.78 4227.59 1118.73 11305.38
00:33:16.851 ========================================================
00:33:16.851 Total : 30276.97 14.78 4227.59 1118.73 11305.38
00:33:16.851
00:33:16.851 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1661837
00:33:16.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1661837) - No such process
00:33:16.851 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1661837
00:33:16.851 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:16.851 07:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:17.111 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:33:17.111 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:33:17.111 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:33:17.111 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:17.111 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:33:17.371 null0
00:33:17.371 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:17.371 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:17.371 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:33:17.371 null1
00:33:17.371 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:17.371 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:17.371 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:33:17.631 null2
00:33:17.631 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:17.631 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:17.631 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:33:17.892 null3
00:33:17.892 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:17.892 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:17.892 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:33:17.892 null4
00:33:18.151 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:18.151 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:18.151 07:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:33:18.152 null5
00:33:18.152 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:18.152 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:18.152 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:33:18.412 null6
00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:33:18.412 null7
00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
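The "kill: (1661837) - No such process" entry above closes the single-namespace phase: the sh@44-sh@50 trace entries that fill this section come from a loop that keeps detaching and re-attaching namespace 1 of nqn.2016-06.io.spdk:cnode1 while growing the NULL1 null bdev one step per pass (the null_size values 1032 through 1054 above), for as long as the background I/O process, PID 1661837, is still alive. A minimal bash sketch of that loop, reconstructed from the trace; the rpc and perf_pid names are assumptions, not read from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed shorthand
    null_size=1024                                                  # starting size assumed
    # kill -0 sends no signal; it only tests whether the PID still exists.
    while kill -0 "$perf_pid" 2>/dev/null; do                       # sh@44
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46
        (( ++null_size ))                                               # sh@49
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # sh@50
    done
    wait "$perf_pid"                                                    # sh@53

The bare "true" entries are the JSON replies printed by rpc.py, and the latency summary above (30276.97 IOPS at 14.78 MiB/s, which works out to roughly 512-byte I/Os on average) is the final report of the background process the loop was probing.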
00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:18.412 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
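The sh@58-sh@64 entries above set up the parallel phase whose interleaved output fills the rest of this section: eight null bdevs (null0 through null7, each created with "bdev_null_create nullN 100 4096") and one background add_remove worker per bdev, with the worker PIDs collected for the wait at sh@66. A sketch of that launch pattern, reconstructed from the trace ($rpc abbreviates the full rpc.py path as in the previous sketch):

    nthreads=8                                          # sh@58
    pids=()                                             # sh@58
    for ((i = 0; i < nthreads; i++)); do                # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096       # sh@60: size 100, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do                # sh@62
        add_remove $((i + 1)) "null$i" &                # sh@63: namespace ID i+1, bdev null<i>
        pids+=($!)                                      # sh@64
    done
    wait "${pids[@]}"                                   # sh@66: "wait 1668022 1668025 ..." in the log

Because the eight workers run concurrently from this point on, their xtrace entries are interleaved rather than sequential.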
00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1668022 1668025 1668026 1668028 1668030 1668032 1668034 1668036 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:18.413 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:18.753 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:18.753 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:18.753 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:18.753 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:18.753 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:18.753 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:18.753 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:18.753 07:42:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.071 07:42:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.071 07:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:19.071 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:19.071 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:19.071 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:19.071 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:19.071 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:19.361 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.362 07:42:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.362 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.362 07:42:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.622 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:19.883 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:33:20.144 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:33:20.144 07:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.144 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.404 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.664 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:20.925 07:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:20.925 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:21.184 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:21.443 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:21.443 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.443 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.443 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:21.443 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:21.443 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.443 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.443 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:21.444 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.704 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:21.963 07:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:21.963 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.963 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:21.963 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:21.963 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:21.963 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:22.222 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:22.223 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:22.223 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
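Taken together, the xtrace lines above are one pattern repeated: bump the loop counter, hot-add a namespace to the subsystem, hot-remove another. A minimal bash sketch of that churn, assuming the null0..null7 bdevs and the cnode1 subsystem created earlier in this run (illustrative only, not the verbatim ns_hotplug_stress.sh; the randomized NSID choice is an assumption):

#!/usr/bin/env bash
# Illustrative reconstruction of the namespace hotplug churn traced above.
# Assumes bdevs null0..null7 and subsystem nqn.2016-06.io.spdk:cnode1 exist.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; i++)); do
    nsid=$((RANDOM % 8 + 1))                  # pick NSID 1..8 to hot-add
    "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))" || true
    nsid=$((RANDOM % 8 + 1))                  # pick an NSID to hot-remove
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid" || true
done

The "|| true" matters for a stress loop: an add can collide with an NSID that is still attached, and a remove can target one that is already gone, and neither should abort the run.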
00:33:22.223 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:22.482 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:22.482 rmmod nvme_tcp
00:33:22.741 rmmod nvme_fabrics
00:33:22.741 rmmod nvme_keyring
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1661462 ']'
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1661462
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1661462 ']'
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1661462
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1661462
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1661462'
killing process with pid 1661462
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1661462
00:33:22.741 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1661462
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:23.000 07:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:24.906 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:24.906 
00:33:24.906 real	0m48.857s
00:33:24.906 user	3m3.139s
00:33:24.906 sys	0m22.588s
00:33:24.906 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:24.906 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:33:24.906 ************************************
00:33:24.906 END TEST nvmf_ns_hotplug_stress
00:33:24.906 ************************************
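The teardown just traced follows a fixed order: unload the kernel initiator modules with retries (connections may still be draining), then signal the target process only if it is still alive. A hedged paraphrase of that nvmfcleanup/killprocess sequence, assuming a hypothetical helper name cleanup_and_kill (not the verbatim nvmf/common.sh source):

#!/usr/bin/env bash
# Sketch of the cleanup order seen in the log above (assumption-labelled).
cleanup_and_kill() {
    local pid=$1
    set +e                                     # unload may fail while busy
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    if kill -0 "$pid" 2> /dev/null; then       # does the process still exist?
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null               # reap it if it was our child
    fi
}

kill -0 sends no signal at all; it only checks liveness, which is why the trace shows it before the real kill.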
00:33:24.906 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:33:24.906 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:24.906 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:24.906 07:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:25.168 ************************************
00:33:25.168 START TEST nvmf_delete_subsystem
00:33:25.168 ************************************
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:33:25.168 * Looking for test storage...
00:33:25.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
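The cmp_versions trace above is checking whether the installed lcov predates version 2: both version strings are split on ".", "-" or ":" and compared numerically field by field. An illustrative re-implementation under those assumptions (lt_version is a hypothetical name, not the verbatim scripts/common.sh):

#!/usr/bin/env bash
# Sketch of the "lt 1.15 2" comparison traced above.
lt_version() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"             # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"             # "2"    -> (2)
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do            # missing fields count as 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                                   # equal is not "less than"
}

lt_version 1.15 2 && echo "installed lcov is older than 2"

Here the first field already decides the result (1 < 2), which is why the traced run returns 0 after a single loop iteration.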
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:33:25.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:25.168 --rc genhtml_branch_coverage=1
00:33:25.168 --rc genhtml_function_coverage=1
00:33:25.168 --rc genhtml_legend=1
00:33:25.168 --rc geninfo_all_blocks=1
00:33:25.168 --rc geninfo_unexecuted_blocks=1
00:33:25.168 
00:33:25.168 '
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:33:25.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:25.168 --rc genhtml_branch_coverage=1
00:33:25.168 --rc genhtml_function_coverage=1
00:33:25.168 --rc genhtml_legend=1
00:33:25.168 --rc geninfo_all_blocks=1
00:33:25.168 --rc geninfo_unexecuted_blocks=1
00:33:25.168 
00:33:25.168 '
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:33:25.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:25.168 --rc genhtml_branch_coverage=1
00:33:25.168 --rc genhtml_function_coverage=1
00:33:25.168 --rc genhtml_legend=1
00:33:25.168 --rc geninfo_all_blocks=1
00:33:25.168 --rc geninfo_unexecuted_blocks=1
00:33:25.168 
00:33:25.168 '
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:33:25.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:25.168 --rc genhtml_branch_coverage=1
00:33:25.168 --rc genhtml_function_coverage=1
00:33:25.168 --rc genhtml_legend=1
00:33:25.168 --rc geninfo_all_blocks=1
00:33:25.168 --rc geninfo_unexecuted_blocks=1
00:33:25.168 
00:33:25.168 '
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:25.168 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
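The export.sh trace above shows the same /opt/golangci, /opt/protoc and /opt/go directories being prepended once per nested source, so PATH keeps accumulating duplicates across tests. A hedged illustration of how that growth can be avoided (path_prepend is a hypothetical helper, not part of the SPDK scripts): only prepend a directory if it is not already present.

#!/usr/bin/env bash
# Idempotent PATH prepend: a no-op when the directory is already listed.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already in PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH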
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:33:25.169 07:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:33.317 07:43:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:33.317 07:43:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:33.317 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:33.317 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:33.317 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.318 07:43:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:33.318 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:33.318 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:33.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:33.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:33:33.318 00:33:33.318 --- 10.0.0.2 ping statistics --- 00:33:33.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.318 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:33.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:33:33.318 00:33:33.318 --- 10.0.0.1 ping statistics --- 00:33:33.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.318 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1673178 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1673178 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1673178 ']' 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
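For reference, the target/initiator wiring that nvmftestinit has just traced can be reproduced by hand. A minimal sketch using the interface names and addresses printed above (cvl_0_0 becomes the in-namespace target port, cvl_0_1 stays in the root namespace as the initiator side; run as root):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                     # same reachability check as the trace

The two pings above (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) confirm the path in both directions before the target application is started.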
00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.318 07:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.318 [2024-11-26 07:43:00.873685] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:33.318 [2024-11-26 07:43:00.874836] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:33:33.318 [2024-11-26 07:43:00.874888] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.318 [2024-11-26 07:43:00.977743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:33.318 [2024-11-26 07:43:01.030657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.318 [2024-11-26 07:43:01.030719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.318 [2024-11-26 07:43:01.030729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.318 [2024-11-26 07:43:01.030736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.318 [2024-11-26 07:43:01.030742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:33.318 [2024-11-26 07:43:01.032545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.318 [2024-11-26 07:43:01.032642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.318 [2024-11-26 07:43:01.109983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:33.318 [2024-11-26 07:43:01.110649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:33.318 [2024-11-26 07:43:01.110915] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
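The nvmfappstart call above reduces to launching nvmf_tgt inside the target namespace with the flags visible in the trace (-i 0 -e 0xFFFF --interrupt-mode -m 0x3, i.e. two cores with interrupt-mode reactors) and then blocking until its RPC socket answers. A hand-rolled equivalent; the socket-polling loop is a stand-in assumption here, as the real waitforlisten in autotest_common.sh does more (retry limit, process liveness checks):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # crude readiness check: wait for the default RPC socket to appear
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done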
00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.891 [2024-11-26 07:43:01.757633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.891 [2024-11-26 07:43:01.790144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.891 NULL1 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.891 07:43:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.891 Delay0 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1673517 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:33:33.891 07:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:33.891 [2024-11-26 07:43:01.917523] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
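Taken together, the rpc_cmd calls traced at delete_subsystem.sh lines 15-26 build the whole target under test. rpc_cmd is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the same setup can be issued directly (arguments exactly as traced; the bdev_delay_create latencies are in microseconds, making Delay0 a deliberately slow 1-second namespace so I/O is still in flight when the subsystem is deleted):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512 B blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

With that in place, the spdk_nvme_perf command just launched drives 128-deep random 70% read / 30% write I/O at the listener while the script deletes the subsystem underneath it, which is what produces the error storm below.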
00:33:35.808 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:35.808 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.808 07:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:36.069 Write completed with error (sct=0, sc=8) 00:33:36.069 Read completed with error (sct=0, sc=8) 00:33:36.069 Read completed with error (sct=0, sc=8) 00:33:36.069 Read completed with error (sct=0, sc=8) 00:33:36.069 starting I/O failed: -6
[several hundred further Read/Write "completed with error (sct=0, sc=8)" completions and periodic "starting I/O failed: -6" markers from spdk_nvme_perf condensed; the distinct qpair-state errors logged during the teardown were:]
00:33:36.070 [2024-11-26 07:43:04.040881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e680 is same with the state(6) to be set
00:33:36.070 [2024-11-26 07:43:04.045034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcb5000d350 is same with the state(6) to be set
00:33:37.013 [2024-11-26 07:43:05.016997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204f9a0 is same with the state(6) to be set
00:33:37.013 [2024-11-26 07:43:05.044616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e860 is same with the state(6) to be set
00:33:37.013 [2024-11-26 07:43:05.045024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x204e4a0 is same with the state(6) to be set
00:33:37.013 [2024-11-26 07:43:05.046116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcb5000d680 is same with the state(6) to be set
00:33:37.014 [2024-11-26 07:43:05.046587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcb5000d020 is same with the state(6) to be set
00:33:37.014 Initializing NVMe Controllers 00:33:37.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:37.014 Controller IO queue size 128, less than required. 00:33:37.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:37.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:37.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:37.014 Initialization complete. Launching workers. 
00:33:37.014 ======================================================== 00:33:37.014 Latency(us) 00:33:37.014 Device Information : IOPS MiB/s Average min max 00:33:37.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.71 0.08 922174.82 346.39 1008346.64 00:33:37.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.69 0.08 910106.52 329.13 1011336.82 00:33:37.014 ======================================================== 00:33:37.014 Total : 320.41 0.16 916046.97 329.13 1011336.82 00:33:37.014 00:33:37.014 [2024-11-26 07:43:05.047014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204f9a0 (9): Bad file descriptor 00:33:37.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:33:37.014 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.014 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:33:37.014 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1673517 00:33:37.014 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1673517 00:33:37.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1673517) - No such process 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1673517 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1673517 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1673517 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:37.587 [2024-11-26 07:43:05.582001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1674193 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1674193 00:33:37.587 07:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:37.849 [2024-11-26 07:43:05.684732] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
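The kill -0 polling traced above for the first perf run (and about to repeat below for pid 1674193) is how delete_subsystem.sh waits for spdk_nvme_perf to die once its subsystem disappears. Reconstructed from the traced fragments (delay=0, kill -0, the (( delay++ > N )) guard, sleep 0.5; the guard is > 30 in the first loop and > 20 in the second), the shape is roughly:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
      if (( delay++ > 20 )); then             # bail out if it never exits (~10 s)
          exit 1
      fi
      sleep 0.5
  done

The "kill: (1674193) - No such process" message below is the expected loop exit: perf aborts with I/O errors once the subsystem is gone.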
00:33:38.109 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:38.109 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1674193 00:33:38.109 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:38.682 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:38.682 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1674193 00:33:38.682 07:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:39.251 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:39.251 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1674193 00:33:39.251 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:39.820 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:39.820 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1674193 00:33:39.820 07:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:40.080 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:40.081 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1674193 00:33:40.081 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:40.651 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:40.651 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1674193 00:33:40.651 07:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:33:40.911 Initializing NVMe Controllers 00:33:40.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:40.911 Controller IO queue size 128, less than required. 00:33:40.911 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:40.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:40.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:40.911 Initialization complete. Launching workers. 
00:33:40.911 ======================================================== 00:33:40.911 Latency(us) 00:33:40.911 Device Information : IOPS MiB/s Average min max 00:33:40.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002427.11 1000163.77 1005890.91 00:33:40.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003980.13 1000243.88 1009523.37 00:33:40.911 ======================================================== 00:33:40.911 Total : 256.00 0.12 1003203.62 1000163.77 1009523.37 00:33:40.911 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1674193 00:33:41.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1674193) - No such process 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1674193 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.172 rmmod nvme_tcp 00:33:41.172 rmmod nvme_fabrics 00:33:41.172 rmmod nvme_keyring 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1673178 ']' 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1673178 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1673178 ']' 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1673178 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.172 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1673178 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1673178' 00:33:41.431 killing process with pid 1673178 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1673178 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1673178 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.431 07:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:43.983 00:33:43.983 real 0m18.452s 00:33:43.983 user 0m26.548s 00:33:43.983 sys 0m7.546s 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:33:43.983 ************************************ 00:33:43.983 END TEST nvmf_delete_subsystem 00:33:43.983 ************************************ 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:43.983 ************************************ 00:33:43.983 START TEST nvmf_host_management 00:33:43.983 ************************************ 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:33:43.983 * Looking for test storage... 00:33:43.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:43.983 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:43.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.984 --rc genhtml_branch_coverage=1 00:33:43.984 --rc genhtml_function_coverage=1 00:33:43.984 --rc genhtml_legend=1 00:33:43.984 --rc geninfo_all_blocks=1 00:33:43.984 --rc geninfo_unexecuted_blocks=1 00:33:43.984 00:33:43.984 ' 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:43.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.984 --rc genhtml_branch_coverage=1 00:33:43.984 --rc genhtml_function_coverage=1 00:33:43.984 --rc genhtml_legend=1 00:33:43.984 --rc geninfo_all_blocks=1 00:33:43.984 --rc geninfo_unexecuted_blocks=1 00:33:43.984 00:33:43.984 ' 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:43.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.984 --rc genhtml_branch_coverage=1 00:33:43.984 --rc genhtml_function_coverage=1 00:33:43.984 --rc genhtml_legend=1 00:33:43.984 --rc geninfo_all_blocks=1 00:33:43.984 --rc geninfo_unexecuted_blocks=1 00:33:43.984 00:33:43.984 ' 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:43.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.984 --rc genhtml_branch_coverage=1 00:33:43.984 --rc genhtml_function_coverage=1 00:33:43.984 --rc genhtml_legend=1 
00:33:43.984 --rc geninfo_all_blocks=1 00:33:43.984 --rc geninfo_unexecuted_blocks=1 00:33:43.984 00:33:43.984 ' 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.984 07:43:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:43.984 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:33:43.985 07:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:52.125 07:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:52.125 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:52.125 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:52.125 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
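(The discovery pass above walks a prebuilt PCI cache and matches each function against known NIC IDs; both ports here report 0x8086:0x159b, i.e. Intel E810 ports bound to the ice driver, so they land in the e810 array rather than x722 or mlx. A minimal standalone sketch of the same classification, assuming lspci and sysfs are available — the harness itself reads its pci_bus_cache and never shells out to lspci:)

# Sketch: locate Intel E810 functions (0x8086:0x159b) and resolve each to
# its kernel net device, mirroring the "Found 0000:4b:00.x" lines above.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done
done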
00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:52.126 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:52.126 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:52.126 07:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:52.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:33:52.126 00:33:52.126 --- 10.0.0.2 ping statistics --- 00:33:52.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.126 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:52.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:52.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:33:52.126 00:33:52.126 --- 10.0.0.1 ping statistics --- 00:33:52.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.126 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1678941 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1678941 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1678941 ']' 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:52.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:52.126 07:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.126 [2024-11-26 07:43:19.393317] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:52.126 [2024-11-26 07:43:19.394455] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:33:52.126 [2024-11-26 07:43:19.394505] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.126 [2024-11-26 07:43:19.495716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:52.126 [2024-11-26 07:43:19.549090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.126 [2024-11-26 07:43:19.549142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.126 [2024-11-26 07:43:19.549150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.126 [2024-11-26 07:43:19.549166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.126 [2024-11-26 07:43:19.549173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.126 [2024-11-26 07:43:19.551539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:52.126 [2024-11-26 07:43:19.551702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:52.126 [2024-11-26 07:43:19.551898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:52.126 [2024-11-26 07:43:19.551899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.126 [2024-11-26 07:43:19.630051] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:52.126 [2024-11-26 07:43:19.631369] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:52.126 [2024-11-26 07:43:19.632318] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:52.127 [2024-11-26 07:43:19.632373] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:52.127 [2024-11-26 07:43:19.632395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
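(At this point nvmf_tcp_init has moved the target-side port cvl_0_0 into the cvl_0_0_ns_spdk namespace and nvmfappstart has launched nvmf_tgt there with core mask 0x1E in interrupt mode, which is why four reactors come up and each nvmf_tgt poll group thread reports intr mode. Condensed to its effect, the sequence traced above is roughly the following — exact interfaces, paths, and masks are from this run; the until-loop is a simplified stand-in for waitforlisten:)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# start the target inside the namespace, as nvmfappstart does above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
# block until the RPC socket answers before configuring the target
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done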
00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.388 [2024-11-26 07:43:20.272924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.388 Malloc0 00:33:52.388 [2024-11-26 07:43:20.369089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1679240 00:33:52.388 07:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1679240 /var/tmp/bdevperf.sock 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1679240 ']' 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:52.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.388 { 00:33:52.388 "params": { 00:33:52.388 "name": "Nvme$subsystem", 00:33:52.388 "trtype": "$TEST_TRANSPORT", 00:33:52.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.388 "adrfam": "ipv4", 00:33:52.388 "trsvcid": "$NVMF_PORT", 00:33:52.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.388 "hdgst": ${hdgst:-false}, 00:33:52.388 "ddgst": ${ddgst:-false} 00:33:52.388 }, 00:33:52.388 "method": "bdev_nvme_attach_controller" 00:33:52.388 } 00:33:52.388 EOF 00:33:52.388 )") 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
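(gen_nvmf_target_json expands the heredoc template above once per requested subsystem — here just subsystem 0 — and pipes the result through jq, so bdevperf receives a complete bdev_nvme_attach_controller config on /dev/fd/63 via process substitution; the rendered JSON is printed next. As a hedged sketch reusing the sourced helper, the equivalent invocation is:)

# Hand bdevperf the same config the harness renders; flags as in this run
# (64 outstanding 64 KiB verify I/Os for 10 seconds against Nvme0).
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -q 64 -o 65536 -w verify -t 10 \
    --json <(gen_nvmf_target_json 0)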
00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:52.388 07:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.388 "params": { 00:33:52.388 "name": "Nvme0", 00:33:52.388 "trtype": "tcp", 00:33:52.388 "traddr": "10.0.0.2", 00:33:52.388 "adrfam": "ipv4", 00:33:52.388 "trsvcid": "4420", 00:33:52.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.388 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.388 "hdgst": false, 00:33:52.388 "ddgst": false 00:33:52.388 }, 00:33:52.388 "method": "bdev_nvme_attach_controller" 00:33:52.388 }' 00:33:52.649 [2024-11-26 07:43:20.487102] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:33:52.649 [2024-11-26 07:43:20.487191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1679240 ] 00:33:52.649 [2024-11-26 07:43:20.581581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.650 [2024-11-26 07:43:20.636262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:52.910 Running I/O for 10 seconds... 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:33:53.483 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=580 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 580 -ge 100 ']' 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.484 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:53.484 [2024-11-26 07:43:21.372619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 
00:33:53.484 [2024-11-26 07:43:21.372757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.372852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c582a0 is same with the state(6) to be set 00:33:53.484 [2024-11-26 07:43:21.373118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.484 [2024-11-26 07:43:21.373186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.484 [2024-11-26 07:43:21.373207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.484 [2024-11-26 07:43:21.373217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.484 [2024-11-26 07:43:21.373228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.484 [2024-11-26 07:43:21.373237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.484 [2024-11-26 07:43:21.373248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.484 [2024-11-26 07:43:21.373256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:53.484 [2024-11-26 07:43:21.373266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:33:53.484 [2024-11-26 07:43:21.373274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.484 [2024-11-26 07:43:21.373284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:53.484 [2024-11-26 07:43:21.373292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every remaining I/O queued on qpair 1: WRITE cid:25-63 and cid:0-5 (lba 85120-90752) and READ cid:6-18 (lba 82688-84224), all len:128, each completed as ABORTED - SQ DELETION (00/08) ...]
00:33:53.486 [2024-11-26 07:43:21.374376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
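An abort storm like the one summarized above is easier to triage from a saved console log than by eye. A minimal reader-side sketch, assuming the console output has been saved to build.log (a placeholder name, not something this run produces):

  # count how many completions were failed by the SQ deletion (hypothetical build.log path)
  grep -c 'ABORTED - SQ DELETION' build.log
  # tally which I/O commands were in flight when the queue went away
  grep -Eo '(READ|WRITE) sqid:1 cid:[0-9]+' build.log | sort | uniq -c | sort -rn

The status pair (00/08) printed on every completion decodes to SCT 0x0 (generic status) / SC 0x08, Command Aborted due to SQ Deletion: once the TCP connection to the target drops, every command still queued on qpair 1 is completed back with that status, which is why the same completion line repeats for each cid.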
00:33:53.486 [2024-11-26 07:43:21.374503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:53.486 [2024-11-26 07:43:21.374517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.486 [2024-11-26 07:43:21.374527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:53.486 [2024-11-26 07:43:21.374534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.486 [2024-11-26 07:43:21.374543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:53.486 [2024-11-26 07:43:21.374551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.486 [2024-11-26 07:43:21.374559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:53.486 [2024-11-26 07:43:21.374567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.486 [2024-11-26 07:43:21.374575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194c000 is same with the state(6) to be set
00:33:53.486 [2024-11-26 07:43:21.375789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:33:53.486 task offset: 84352 on job bdev=Nvme0n1 fails
00:33:53.486
00:33:53.486                                                  Latency(us)
00:33:53.486 [2024-11-26T06:43:21.584Z] Device Information : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average      min      max
00:33:53.486 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:53.486 Job: Nvme0n1 ended in about 0.41 seconds with error
00:33:53.486 	 Verification LBA range: start 0x0 length 0x400
00:33:53.486 	 Nvme0n1 : 0.41 1578.45 98.65 156.38 0.00 35706.46 1843.20 34078.72
00:33:53.486 [2024-11-26T06:43:21.584Z] ===================================================================================================================
00:33:53.486 [2024-11-26T06:43:21.584Z] Total : 1578.45 98.65 156.38 0.00 35706.46 1843.20 34078.72
00:33:53.486 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:53.486 [2024-11-26 07:43:21.378012] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:33:53.486 [2024-11-26 07:43:21.378053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194c000 (9): Bad file descriptor
00:33:53.486 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:33:53.486 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:53.486 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:53.486 [2024-11-26 07:43:21.379636] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
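The access error above is the pivot of this test phase: bdevperf connects with hostnqn nqn.2016-06.io.spdk:host0, which the subsystem's allow list does not yet contain, so the FABRIC CONNECT below completes with COMMAND SPECIFIC (01/84), that is SCT 1, SC 0x84 (132 decimal), the NVMe-oF Connect Invalid Host status. The rpc_cmd wrapper above issues the allow-list change; outside the harness the same call is a single line against the target's RPC socket (the rpc.py path is assumed relative to an SPDK checkout):

  # allow host0 to connect to subsystem cnode0, as the rpc_cmd above does
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Until that takes effect on the target, reconnect attempts keep failing, which is what the retry sequence below shows.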
00:33:53.486 [2024-11-26 07:43:21.379736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:33:53.486 [2024-11-26 07:43:21.379767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:53.486 [2024-11-26 07:43:21.379785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:33:53.486 [2024-11-26 07:43:21.379795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:33:53.486 [2024-11-26 07:43:21.379803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:53.486 [2024-11-26 07:43:21.379811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x194c000
00:33:53.486 [2024-11-26 07:43:21.379834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194c000 (9): Bad file descriptor
00:33:53.486 [2024-11-26 07:43:21.379848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:33:53.486 [2024-11-26 07:43:21.379857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:33:53.486 [2024-11-26 07:43:21.379867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:33:53.486 [2024-11-26 07:43:21.379878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:33:53.486 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:53.486 07:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:33:54.426 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1679240
00:33:54.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1679240) - No such process
00:33:54.426 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:33:54.426 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:33:54.426 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:33:54.427 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:33:54.427 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:33:54.427 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:33:54.427 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:54.427 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:54.427 {
00:33:54.427 "params": {
"name": "Nvme$subsystem", 00:33:54.427 "trtype": "$TEST_TRANSPORT", 00:33:54.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:54.427 "adrfam": "ipv4", 00:33:54.427 "trsvcid": "$NVMF_PORT", 00:33:54.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.427 "hdgst": ${hdgst:-false}, 00:33:54.427 "ddgst": ${ddgst:-false} 00:33:54.427 }, 00:33:54.427 "method": "bdev_nvme_attach_controller" 00:33:54.427 } 00:33:54.427 EOF 00:33:54.427 )") 00:33:54.427 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:33:54.427 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:33:54.427 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:33:54.427 07:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:54.427 "params": { 00:33:54.427 "name": "Nvme0", 00:33:54.427 "trtype": "tcp", 00:33:54.427 "traddr": "10.0.0.2", 00:33:54.427 "adrfam": "ipv4", 00:33:54.427 "trsvcid": "4420", 00:33:54.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:54.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:54.427 "hdgst": false, 00:33:54.427 "ddgst": false 00:33:54.427 }, 00:33:54.427 "method": "bdev_nvme_attach_controller" 00:33:54.427 }' 00:33:54.427 [2024-11-26 07:43:22.455844] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:33:54.427 [2024-11-26 07:43:22.455923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1679592 ] 00:33:54.686 [2024-11-26 07:43:22.548883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.686 [2024-11-26 07:43:22.584084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.947 Running I/O for 1 seconds... 
00:33:54.947 Running I/O for 1 seconds...
00:33:55.887 1733.00 IOPS, 108.31 MiB/s
00:33:55.887                                                  Latency(us)
00:33:55.887 [2024-11-26T06:43:23.985Z] Device Information : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average      min      max
00:33:55.887 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:55.887 	 Verification LBA range: start 0x0 length 0x400
00:33:55.887 	 Nvme0n1 : 1.01 1784.60 111.54 0.00 0.00 35158.48 1774.93 36700.16
00:33:55.887 [2024-11-26T06:43:23.985Z] ===================================================================================================================
00:33:55.887 [2024-11-26T06:43:23.985Z] Total : 1784.60 111.54 0.00 0.00 35158.48 1774.93 36700.16
00:33:55.887 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:33:55.887 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:33:55.887 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:33:56.147 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:33:56.147 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:33:56.148 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:56.148 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:33:56.148 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:56.148 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:33:56.148 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:56.148 07:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:56.148 rmmod nvme_tcp
00:33:56.148 rmmod nvme_fabrics
00:33:56.148 rmmod nvme_keyring
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1678941 ']'
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1678941
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1678941 ']'
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1678941
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
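A quick sanity check on the MiB/s columns reported in the two bdevperf tables above: with -o 65536 every I/O is 64 KiB, exactly 1/16 MiB, so throughput is simply IOPS divided by 16. Verified with bc:

  echo 'scale=4; 1784.60 * 65536 / 1048576' | bc   # 111.5375 -> reported as 111.54
  echo 'scale=4; 1578.45 * 65536 / 1048576' | bc   # 98.6531  -> reported as 98.65

So the successful 1-second run sustained about 13% higher throughput than the failed run managed in its 0.41 s before the controller reset.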
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1678941
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1678941'
00:33:56.148 killing process with pid 1678941
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1678941
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1678941
00:33:56.148 [2024-11-26 07:43:24.218061] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:33:56.148 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:56.408 07:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:58.319 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:58.319 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:33:58.319
00:33:58.319 real	0m14.789s
00:33:58.319 user	0m19.466s
00:33:58.319 sys	0m7.620s
00:33:58.319 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:58.319 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:58.319 ************************************
00:33:58.319 END TEST nvmf_host_management
00:33:58.319 ************************************
00:33:58.319 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:58.319 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:58.319 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.319 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:58.581 ************************************ 00:33:58.581 START TEST nvmf_lvol 00:33:58.581 ************************************ 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:58.581 * Looking for test storage... 00:33:58.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:58.581 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.582 --rc genhtml_branch_coverage=1 00:33:58.582 --rc genhtml_function_coverage=1 00:33:58.582 --rc genhtml_legend=1 00:33:58.582 --rc geninfo_all_blocks=1 00:33:58.582 --rc geninfo_unexecuted_blocks=1 00:33:58.582 00:33:58.582 ' 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.582 --rc genhtml_branch_coverage=1 00:33:58.582 --rc genhtml_function_coverage=1 00:33:58.582 --rc genhtml_legend=1 00:33:58.582 --rc geninfo_all_blocks=1 00:33:58.582 --rc geninfo_unexecuted_blocks=1 00:33:58.582 00:33:58.582 ' 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.582 --rc genhtml_branch_coverage=1 00:33:58.582 --rc genhtml_function_coverage=1 00:33:58.582 --rc genhtml_legend=1 00:33:58.582 --rc geninfo_all_blocks=1 00:33:58.582 --rc geninfo_unexecuted_blocks=1 00:33:58.582 00:33:58.582 ' 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:58.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.582 --rc genhtml_branch_coverage=1 00:33:58.582 --rc genhtml_function_coverage=1 00:33:58.582 --rc genhtml_legend=1 00:33:58.582 --rc geninfo_all_blocks=1 00:33:58.582 --rc geninfo_unexecuted_blocks=1 00:33:58.582 00:33:58.582 ' 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.582 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.583 07:43:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.583 07:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:06.722 07:43:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:06.722 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:06.722 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:06.723 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:06.723 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:06.723 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:06.723 07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:06.723 
07:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:06.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:06.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:34:06.723 00:34:06.723 --- 10.0.0.2 ping statistics --- 00:34:06.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.723 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:06.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:06.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:34:06.723 00:34:06.723 --- 10.0.0.1 ping statistics --- 00:34:06.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:06.723 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:06.723 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:06.724 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:06.724 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1684108 00:34:06.724 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1684108 00:34:06.724 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:34:06.724 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1684108 ']' 00:34:06.724 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.724 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:06.724 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:06.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:06.724 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:06.724 07:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:06.724 [2024-11-26 07:43:34.266438] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
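The namespace plumbing traced above (nvmf/common.sh@250-@291) is what lets one host act as both NVMe/TCP initiator and target over real E810 hardware: one port of the pair (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as 10.0.0.1, and the two pings prove the path in both directions. A minimal standalone sketch of the same steps, with names and addresses taken from this run (run as root; assumes the two ports exist and can reach each other):

    # Sketch of the nvmf_tcp_init plumbing seen above; not the full helper.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The ipts wrapper tags its rule with an SPDK_NVMF comment so teardown can
    # strip exactly these rules via iptables-save | grep -v SPDK_NVMF | iptables-restore.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root namespace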
00:34:06.724 [2024-11-26 07:43:34.267538] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:34:06.724 [2024-11-26 07:43:34.267587] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:06.724 [2024-11-26 07:43:34.366747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:06.724 [2024-11-26 07:43:34.420179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:06.724 [2024-11-26 07:43:34.420233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:06.724 [2024-11-26 07:43:34.420242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:06.724 [2024-11-26 07:43:34.420250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:06.724 [2024-11-26 07:43:34.420257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:06.724 [2024-11-26 07:43:34.422408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.724 [2024-11-26 07:43:34.422636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:06.724 [2024-11-26 07:43:34.422638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.724 [2024-11-26 07:43:34.499855] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:06.724 [2024-11-26 07:43:34.500898] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:06.724 [2024-11-26 07:43:34.501419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:06.724 [2024-11-26 07:43:34.501556] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
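With the target up, the trace below (target/nvmf_lvol.sh@21-@44) provisions the volume under test entirely over JSON-RPC. Condensed, the sequence is the sketch that follows; capturing UUIDs into shell variables mirrors how the script consumes rpc.py output (each create call prints the new object's id), and the size argument to bdev_lvol_create is in MiB:

    # Condensed from the rpc.py calls traced below; rpc.py is scripts/rpc.py.
    rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as used by this test
    rpc.py bdev_malloc_create 64 512                      # Malloc0: 64 MiB, 512 B blocks
    rpc.py bdev_malloc_create 64 512                      # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # raid0 over the two malloc bdevs
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)      # prints the lvstore UUID
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol, prints its UUID
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420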
00:34:07.295 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:07.295 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:34:07.295 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:07.295 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:07.295 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:07.295 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.295 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:07.295 [2024-11-26 07:43:35.295708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:07.295 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:07.555 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:34:07.555 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:07.814 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:34:07.814 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:34:08.074 07:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:34:08.334 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=85abd635-76bc-425f-9278-84b823aa6b11 00:34:08.334 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 85abd635-76bc-425f-9278-84b823aa6b11 lvol 20 00:34:08.334 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3cc2cdc5-cfa4-424c-8d38-4a6dc4412d02 00:34:08.334 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:08.593 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3cc2cdc5-cfa4-424c-8d38-4a6dc4412d02 00:34:08.852 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:08.852 [2024-11-26 07:43:36.899701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:34:08.852 07:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:09.112 07:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1684631 00:34:09.112 07:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:34:09.112 07:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:34:10.052 07:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3cc2cdc5-cfa4-424c-8d38-4a6dc4412d02 MY_SNAPSHOT 00:34:10.312 07:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=16b0639d-474e-4008-8f22-62acbfedf8d6 00:34:10.312 07:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3cc2cdc5-cfa4-424c-8d38-4a6dc4412d02 30 00:34:10.572 07:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 16b0639d-474e-4008-8f22-62acbfedf8d6 MY_CLONE 00:34:10.831 07:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=75a54dfe-2bc4-4ef3-b9f2-4129d54732e2 00:34:10.831 07:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 75a54dfe-2bc4-4ef3-b9f2-4129d54732e2 00:34:11.400 07:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1684631 00:34:19.630 Initializing NVMe Controllers 00:34:19.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:34:19.630 Controller IO queue size 128, less than required. 00:34:19.630 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:19.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:34:19.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:34:19.630 Initialization complete. Launching workers. 
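The numbers that follow come from the spdk_nvme_perf invocation at target/nvmf_lvol.sh@41: the test backgrounds perf, sleeps one second, and then does the snapshot/resize/clone/inflate churn traced above while the volume is under write load, joining the perf pid at @53. For reference, the flags copied from the trace (meanings per the tool's help; the two worker cores explain the two per-lcore rows below):

    # The perf command as traced above, annotated; run from the SPDK repo root.
    build/bin/spdk_nvme_perf \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
    # -r  connect over NVMe/TCP to 10.0.0.2:4420
    # -o  4096-byte I/Os
    # -q  queue depth 128 (hence the "Controller IO queue size 128" advisory above)
    # -s  hugepage memory pool size in MB
    # -w  100% random writes
    # -t  10-second run
    # -c  core mask 0x18: workers on cores 3 and 4, matching the two lcore rows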
00:34:19.630 ========================================================
00:34:19.630                                                        Latency(us)
00:34:19.630 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:34:19.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   15180.14      59.30    8436.71    1046.67   52339.77
00:34:19.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   15137.34      59.13    8457.78    4168.12   56127.89
00:34:19.630 ========================================================
00:34:19.630 Total                                                                    :   30317.49     118.43    8447.23    1046.67   56127.89
00:34:19.630
00:34:19.630 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:19.888 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3cc2cdc5-cfa4-424c-8d38-4a6dc4412d02
00:34:20.148 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85abd635-76bc-425f-9278-84b823aa6b11
00:34:20.148 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:34:20.148 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:34:20.148 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:34:20.148 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:20.148 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:34:20.148 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:20.148 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:34:20.148 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:20.148 07:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:20.148 rmmod nvme_tcp
00:34:20.148 rmmod nvme_fabrics
00:34:20.148 rmmod nvme_keyring
00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1684108 ']'
00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1684108
00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1684108 ']'
00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1684108
00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1684108 00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1684108' 00:34:20.148 killing process with pid 1684108 00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1684108 00:34:20.148 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1684108 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.407 07:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.380 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:22.380 00:34:22.380 real 0m23.950s 00:34:22.380 user 0m55.848s 00:34:22.381 sys 0m10.958s 00:34:22.381 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.381 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:34:22.381 ************************************ 00:34:22.381 END TEST nvmf_lvol 00:34:22.381 ************************************ 00:34:22.381 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:22.381 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:22.381 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.381 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:22.381 ************************************ 00:34:22.381 START TEST nvmf_lvs_grow 00:34:22.381 
************************************ 00:34:22.381 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:34:22.643 * Looking for test storage... 00:34:22.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.643 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:22.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.644 --rc genhtml_branch_coverage=1 00:34:22.644 --rc genhtml_function_coverage=1 00:34:22.644 --rc genhtml_legend=1 00:34:22.644 --rc geninfo_all_blocks=1 00:34:22.644 --rc geninfo_unexecuted_blocks=1 00:34:22.644 00:34:22.644 ' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:22.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.644 --rc genhtml_branch_coverage=1 00:34:22.644 --rc genhtml_function_coverage=1 00:34:22.644 --rc genhtml_legend=1 00:34:22.644 --rc geninfo_all_blocks=1 00:34:22.644 --rc geninfo_unexecuted_blocks=1 00:34:22.644 00:34:22.644 ' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:22.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.644 --rc genhtml_branch_coverage=1 00:34:22.644 --rc genhtml_function_coverage=1 00:34:22.644 --rc genhtml_legend=1 00:34:22.644 --rc geninfo_all_blocks=1 00:34:22.644 --rc geninfo_unexecuted_blocks=1 00:34:22.644 00:34:22.644 ' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:22.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.644 --rc genhtml_branch_coverage=1 00:34:22.644 --rc genhtml_function_coverage=1 00:34:22.644 --rc genhtml_legend=1 00:34:22.644 --rc geninfo_all_blocks=1 00:34:22.644 --rc geninfo_unexecuted_blocks=1 00:34:22.644 00:34:22.644 ' 00:34:22.644 07:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
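build_nvmf_app_args, traced here and finishing just below with the --interrupt-mode append, assembles the target's command line as a bash array so later code can prepend the namespace wrapper and append per-test flags without quoting problems. A simplified sketch of the pattern; the interrupt-mode guard variable name is paraphrased, not taken from the source:

    # Pattern sketch only; guard variable names are illustrative.
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id and tracepoint group mask (common.sh@29)
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty unless a no-hugepages run (common.sh@31)
    if ((interrupt_mode)); then                   # true for this suite (common.sh@33-@34)
        NVMF_APP+=(--interrupt-mode)
    fi
    # Later, nvmf_tcp_init prepends the namespace wrapper (common.sh@293):
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0x1 &                     # as in nvmfappstart -m 0x1 further below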
00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.644 07:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:30.784 07:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.784 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
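gather_supported_nvmf_pci_devs, traced above and continuing below, buckets NICs into e810/x722/mlx arrays from a pre-scanned pci_bus_cache keyed by vendor:device; 0x8086:0x159b is the E810 "ice" device that shows up twice on this host. A rough standalone equivalent, substituting lspci for the script's cached scan (an assumption; the helper itself does not shell out to lspci):

    # Approximation of the classification traced here, using lspci -Dn
    # (slot, class, vendor:device) instead of common.sh's pci_bus_cache.
    e810=() x722=() mlx=()
    while read -r addr vd; do
        case $vd in
            8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 (this host: 0000:4b:00.0/.1)
            8086:37d2)           x722+=("$addr") ;;   # Intel X722
            15b3:*)              mlx+=("$addr")  ;;   # Mellanox ConnectX family
        esac
    done < <(lspci -Dn | awk '{print $1, $3}')
    for pci in "${e810[@]}"; do
        echo "Found $pci (E810)"                      # cf. 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
    done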
00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:30.785 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:30.785 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:30.785 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:30.785 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:30.785 07:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:30.785 07:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:30.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:34:30.785 00:34:30.785 --- 10.0.0.2 ping statistics --- 00:34:30.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.785 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:30.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:34:30.785 00:34:30.785 --- 10.0.0.1 ping statistics --- 00:34:30.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.785 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1690969 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1690969 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1690969 ']' 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:30.785 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:30.786 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:30.786 07:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:30.786 [2024-11-26 07:43:58.234213] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
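nvmfappstart (traced above) backgrounds nvmf_tgt inside the namespace and then calls waitforlisten, which is why the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line appears before the DPDK banner. A minimal version of that wait loop, simplified from the helper traced at common/autotest_common.sh@835-@868 (the real one also handles a configurable RPC address and retry budget):

    # Minimal waitforlisten sketch; polls the RPC socket until the target answers.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died while we waited
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0                               # RPC server is answering
            fi
            sleep 0.5
        done
        return 1
    }
    waitforlisten_sketch "$nvmfpid"                    # nvmfpid=1690969 in this run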
00:34:30.786 [2024-11-26 07:43:58.235345] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:34:30.786 [2024-11-26 07:43:58.235396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.786 [2024-11-26 07:43:58.338901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:30.786 [2024-11-26 07:43:58.390644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.786 [2024-11-26 07:43:58.390695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.786 [2024-11-26 07:43:58.390704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.786 [2024-11-26 07:43:58.390711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.786 [2024-11-26 07:43:58.390717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:30.786 [2024-11-26 07:43:58.391483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:30.786 [2024-11-26 07:43:58.467735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:30.786 [2024-11-26 07:43:58.468030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:31.046 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:31.046 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:34:31.046 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:31.046 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:31.046 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:31.046 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.046 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:31.308 [2024-11-26 07:43:59.268430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:31.308 ************************************ 00:34:31.308 START TEST lvs_grow_clean 00:34:31.308 ************************************ 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:31.308 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:31.568 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:31.568 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:31.829 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0cc6189c-eb18-4a54-8c58-121801402af7 00:34:31.829 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:31.829 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:32.089 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:32.089 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:32.089 07:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0cc6189c-eb18-4a54-8c58-121801402af7 lvol 150 00:34:32.089 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4f425b59-27bc-4903-b302-de4ff551abd3 00:34:32.089 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:32.089 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:32.350 [2024-11-26 07:44:00.336054] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:32.350 [2024-11-26 07:44:00.336267] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:32.350 true 00:34:32.350 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:32.350 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:32.611 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:32.611 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:32.873 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4f425b59-27bc-4903-b302-de4ff551abd3 00:34:32.873 07:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:33.134 [2024-11-26 07:44:01.072750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.134 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:33.395 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1691499 00:34:33.395 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:33.395 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:33.395 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1691499 /var/tmp/bdevperf.sock 00:34:33.395 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1691499 ']' 00:34:33.395 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:34:33.395 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:33.395 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:33.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:33.395 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:33.395 07:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:33.395 [2024-11-26 07:44:01.329747] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:34:33.395 [2024-11-26 07:44:01.329824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691499 ] 00:34:33.395 [2024-11-26 07:44:01.422958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.395 [2024-11-26 07:44:01.475711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.337 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.337 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:34:34.337 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:34.598 Nvme0n1 00:34:34.598 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:34.860 [ 00:34:34.860 { 00:34:34.860 "name": "Nvme0n1", 00:34:34.860 "aliases": [ 00:34:34.860 "4f425b59-27bc-4903-b302-de4ff551abd3" 00:34:34.860 ], 00:34:34.860 "product_name": "NVMe disk", 00:34:34.860 "block_size": 4096, 00:34:34.860 "num_blocks": 38912, 00:34:34.860 "uuid": "4f425b59-27bc-4903-b302-de4ff551abd3", 00:34:34.860 "numa_id": 0, 00:34:34.860 "assigned_rate_limits": { 00:34:34.860 "rw_ios_per_sec": 0, 00:34:34.860 "rw_mbytes_per_sec": 0, 00:34:34.860 "r_mbytes_per_sec": 0, 00:34:34.860 "w_mbytes_per_sec": 0 00:34:34.860 }, 00:34:34.860 "claimed": false, 00:34:34.860 "zoned": false, 00:34:34.860 "supported_io_types": { 00:34:34.860 "read": true, 00:34:34.860 "write": true, 00:34:34.860 "unmap": true, 00:34:34.860 "flush": true, 00:34:34.860 "reset": true, 00:34:34.860 "nvme_admin": true, 00:34:34.860 "nvme_io": true, 00:34:34.860 "nvme_io_md": false, 00:34:34.860 "write_zeroes": true, 00:34:34.860 "zcopy": false, 00:34:34.860 "get_zone_info": false, 00:34:34.860 "zone_management": false, 00:34:34.860 "zone_append": false, 00:34:34.860 "compare": true, 00:34:34.860 "compare_and_write": true, 00:34:34.860 "abort": true, 00:34:34.860 "seek_hole": false, 00:34:34.860 "seek_data": false, 00:34:34.860 "copy": true, 
00:34:34.860 "nvme_iov_md": false 00:34:34.860 }, 00:34:34.860 "memory_domains": [ 00:34:34.860 { 00:34:34.860 "dma_device_id": "system", 00:34:34.860 "dma_device_type": 1 00:34:34.860 } 00:34:34.860 ], 00:34:34.860 "driver_specific": { 00:34:34.860 "nvme": [ 00:34:34.860 { 00:34:34.860 "trid": { 00:34:34.860 "trtype": "TCP", 00:34:34.860 "adrfam": "IPv4", 00:34:34.860 "traddr": "10.0.0.2", 00:34:34.860 "trsvcid": "4420", 00:34:34.860 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:34.860 }, 00:34:34.860 "ctrlr_data": { 00:34:34.860 "cntlid": 1, 00:34:34.860 "vendor_id": "0x8086", 00:34:34.860 "model_number": "SPDK bdev Controller", 00:34:34.860 "serial_number": "SPDK0", 00:34:34.860 "firmware_revision": "25.01", 00:34:34.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:34.860 "oacs": { 00:34:34.860 "security": 0, 00:34:34.860 "format": 0, 00:34:34.860 "firmware": 0, 00:34:34.860 "ns_manage": 0 00:34:34.860 }, 00:34:34.860 "multi_ctrlr": true, 00:34:34.860 "ana_reporting": false 00:34:34.860 }, 00:34:34.860 "vs": { 00:34:34.860 "nvme_version": "1.3" 00:34:34.860 }, 00:34:34.860 "ns_data": { 00:34:34.860 "id": 1, 00:34:34.860 "can_share": true 00:34:34.860 } 00:34:34.860 } 00:34:34.860 ], 00:34:34.860 "mp_policy": "active_passive" 00:34:34.860 } 00:34:34.860 } 00:34:34.860 ] 00:34:34.860 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:34.860 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1691700 00:34:34.860 07:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:34.860 Running I/O for 10 seconds... 
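The bdev_get_bdevs output above shows the remote lvol surfacing as an NVMe disk of 38912 4 KiB blocks: the 150 MiB create request was rounded up to 38 whole 4 MiB clusters (38 * 1024 blocks). While the 10-second randwrite job runs, the test grows the lvstore underneath it; a sketch of that sequence with the UUID from this run:

    # Grow the pool mid-I/O: the AIO file was already truncated 200M -> 400M
    # and rescanned, so the lvstore can claim the new space while bdevperf writes.
    ./scripts/rpc.py bdev_lvol_grow_lvstore -u 0cc6189c-eb18-4a54-8c58-121801402af7
    # total_data_clusters doubles from 49 to 99 (4 MiB clusters, with one
    # cluster's worth of space reserved for lvstore metadata).
    ./scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 \
        | jq -r '.[0].total_data_clusters'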
00:34:35.804 Latency(us) 00:34:35.804 [2024-11-26T06:44:03.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:35.804 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:34:35.804 [2024-11-26T06:44:03.902Z] =================================================================================================================== 00:34:35.804 [2024-11-26T06:44:03.902Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:34:35.804 00:34:36.748 07:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:36.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:36.748 Nvme0n1 : 2.00 17050.00 66.60 0.00 0.00 0.00 0.00 0.00 00:34:36.748 [2024-11-26T06:44:04.846Z] =================================================================================================================== 00:34:36.748 [2024-11-26T06:44:04.846Z] Total : 17050.00 66.60 0.00 0.00 0.00 0.00 0.00 00:34:36.748 00:34:37.009 true 00:34:37.009 07:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:37.009 07:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:37.009 07:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:37.009 07:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:37.010 07:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1691700 00:34:37.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:37.953 Nvme0n1 : 3.00 17283.33 67.51 0.00 0.00 0.00 0.00 0.00 00:34:37.953 [2024-11-26T06:44:06.051Z] =================================================================================================================== 00:34:37.953 [2024-11-26T06:44:06.051Z] Total : 17283.33 67.51 0.00 0.00 0.00 0.00 0.00 00:34:37.953 00:34:38.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:38.895 Nvme0n1 : 4.00 18074.25 70.60 0.00 0.00 0.00 0.00 0.00 00:34:38.895 [2024-11-26T06:44:06.993Z] =================================================================================================================== 00:34:38.895 [2024-11-26T06:44:06.993Z] Total : 18074.25 70.60 0.00 0.00 0.00 0.00 0.00 00:34:38.895 00:34:39.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:39.836 Nvme0n1 : 5.00 19564.80 76.42 0.00 0.00 0.00 0.00 0.00 00:34:39.836 [2024-11-26T06:44:07.934Z] =================================================================================================================== 00:34:39.836 [2024-11-26T06:44:07.934Z] Total : 19564.80 76.42 0.00 0.00 0.00 0.00 0.00 00:34:39.836 00:34:40.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:40.778 Nvme0n1 : 6.00 20569.17 80.35 0.00 0.00 0.00 0.00 0.00 00:34:40.779 [2024-11-26T06:44:08.877Z] 
=================================================================================================================== 00:34:40.779 [2024-11-26T06:44:08.877Z] Total : 20569.17 80.35 0.00 0.00 0.00 0.00 0.00 00:34:40.779 00:34:41.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:41.720 Nvme0n1 : 7.00 21295.57 83.19 0.00 0.00 0.00 0.00 0.00 00:34:41.720 [2024-11-26T06:44:09.818Z] =================================================================================================================== 00:34:41.720 [2024-11-26T06:44:09.818Z] Total : 21295.57 83.19 0.00 0.00 0.00 0.00 0.00 00:34:41.720 00:34:43.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:43.104 Nvme0n1 : 8.00 21824.50 85.25 0.00 0.00 0.00 0.00 0.00 00:34:43.104 [2024-11-26T06:44:11.202Z] =================================================================================================================== 00:34:43.104 [2024-11-26T06:44:11.202Z] Total : 21824.50 85.25 0.00 0.00 0.00 0.00 0.00 00:34:43.104 00:34:44.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:44.045 Nvme0n1 : 9.00 22250.00 86.91 0.00 0.00 0.00 0.00 0.00 00:34:44.045 [2024-11-26T06:44:12.143Z] =================================================================================================================== 00:34:44.045 [2024-11-26T06:44:12.143Z] Total : 22250.00 86.91 0.00 0.00 0.00 0.00 0.00 00:34:44.045 00:34:44.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:44.985 Nvme0n1 : 10.00 22590.40 88.24 0.00 0.00 0.00 0.00 0.00 00:34:44.985 [2024-11-26T06:44:13.083Z] =================================================================================================================== 00:34:44.985 [2024-11-26T06:44:13.083Z] Total : 22590.40 88.24 0.00 0.00 0.00 0.00 0.00 00:34:44.985 00:34:44.985 00:34:44.985 Latency(us) 00:34:44.985 [2024-11-26T06:44:13.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:44.985 Nvme0n1 : 10.00 22592.80 88.25 0.00 0.00 5663.02 3358.72 32112.64 00:34:44.985 [2024-11-26T06:44:13.083Z] =================================================================================================================== 00:34:44.985 [2024-11-26T06:44:13.083Z] Total : 22592.80 88.25 0.00 0.00 5663.02 3358.72 32112.64 00:34:44.985 { 00:34:44.985 "results": [ 00:34:44.985 { 00:34:44.985 "job": "Nvme0n1", 00:34:44.985 "core_mask": "0x2", 00:34:44.985 "workload": "randwrite", 00:34:44.985 "status": "finished", 00:34:44.985 "queue_depth": 128, 00:34:44.985 "io_size": 4096, 00:34:44.985 "runtime": 10.004602, 00:34:44.985 "iops": 22592.80279215505, 00:34:44.985 "mibps": 88.25313590685566, 00:34:44.985 "io_failed": 0, 00:34:44.985 "io_timeout": 0, 00:34:44.985 "avg_latency_us": 5663.015891083268, 00:34:44.985 "min_latency_us": 3358.72, 00:34:44.985 "max_latency_us": 32112.64 00:34:44.985 } 00:34:44.985 ], 00:34:44.985 "core_count": 1 00:34:44.985 } 00:34:44.985 07:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1691499 00:34:44.985 07:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1691499 ']' 00:34:44.985 07:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1691499 00:34:44.985 07:44:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:34:44.985 07:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:44.985 07:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1691499 00:34:44.985 07:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:44.985 07:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:44.985 07:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1691499' 00:34:44.985 killing process with pid 1691499 00:34:44.985 07:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1691499 00:34:44.985 Received shutdown signal, test time was about 10.000000 seconds 00:34:44.985 00:34:44.985 Latency(us) 00:34:44.985 [2024-11-26T06:44:13.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.985 [2024-11-26T06:44:13.083Z] =================================================================================================================== 00:34:44.985 [2024-11-26T06:44:13.083Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:44.985 07:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1691499 00:34:44.985 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:45.245 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:45.506 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:45.506 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:34:45.506 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:34:45.506 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:34:45.506 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:45.767 [2024-11-26 07:44:13.672102] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:45.767 07:44:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:34:45.767 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:46.027 request: 00:34:46.027 { 00:34:46.027 "uuid": "0cc6189c-eb18-4a54-8c58-121801402af7", 00:34:46.027 "method": "bdev_lvol_get_lvstores", 00:34:46.027 "req_id": 1 00:34:46.027 } 00:34:46.027 Got JSON-RPC error response 00:34:46.027 response: 00:34:46.027 { 00:34:46.027 "code": -19, 00:34:46.027 "message": "No such device" 00:34:46.027 } 00:34:46.027 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:34:46.027 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:46.028 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:46.028 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:46.028 07:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:46.028 aio_bdev 00:34:46.028 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4f425b59-27bc-4903-b302-de4ff551abd3 
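The NOT wrapper above exercises the negative path: once the AIO bdev is deleted the lvstore is closed, and the same bdev_lvol_get_lvstores call fails with JSON-RPC error -19 (No such device). Re-creating an AIO bdev over the same backing file re-runs examine on the on-disk lvstore metadata, so the lvol reappears without being re-created; a sketch of that round trip with this run's identifiers:

    # After bdev_aio_delete, the lvstore query must fail (error -19).
    ./scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 \
        && echo "unexpected: lvstore still present"
    # Re-open the same file as a new AIO bdev; examine finds the lvstore on disk.
    ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    ./scripts/rpc.py bdev_wait_for_examine
    # The lvol is back, addressable by its original UUID.
    ./scripts/rpc.py bdev_get_bdevs -b 4f425b59-27bc-4903-b302-de4ff551abd3 -t 2000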
00:34:46.028 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4f425b59-27bc-4903-b302-de4ff551abd3 00:34:46.028 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:46.028 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:34:46.028 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:46.028 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:46.028 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:46.288 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4f425b59-27bc-4903-b302-de4ff551abd3 -t 2000 00:34:46.549 [ 00:34:46.549 { 00:34:46.549 "name": "4f425b59-27bc-4903-b302-de4ff551abd3", 00:34:46.549 "aliases": [ 00:34:46.549 "lvs/lvol" 00:34:46.549 ], 00:34:46.549 "product_name": "Logical Volume", 00:34:46.549 "block_size": 4096, 00:34:46.549 "num_blocks": 38912, 00:34:46.549 "uuid": "4f425b59-27bc-4903-b302-de4ff551abd3", 00:34:46.549 "assigned_rate_limits": { 00:34:46.549 "rw_ios_per_sec": 0, 00:34:46.549 "rw_mbytes_per_sec": 0, 00:34:46.549 "r_mbytes_per_sec": 0, 00:34:46.549 "w_mbytes_per_sec": 0 00:34:46.549 }, 00:34:46.549 "claimed": false, 00:34:46.549 "zoned": false, 00:34:46.549 "supported_io_types": { 00:34:46.549 "read": true, 00:34:46.549 "write": true, 00:34:46.549 "unmap": true, 00:34:46.549 "flush": false, 00:34:46.549 "reset": true, 00:34:46.549 "nvme_admin": false, 00:34:46.549 "nvme_io": false, 00:34:46.549 "nvme_io_md": false, 00:34:46.549 "write_zeroes": true, 00:34:46.549 "zcopy": false, 00:34:46.549 "get_zone_info": false, 00:34:46.549 "zone_management": false, 00:34:46.549 "zone_append": false, 00:34:46.549 "compare": false, 00:34:46.549 "compare_and_write": false, 00:34:46.549 "abort": false, 00:34:46.549 "seek_hole": true, 00:34:46.549 "seek_data": true, 00:34:46.549 "copy": false, 00:34:46.549 "nvme_iov_md": false 00:34:46.549 }, 00:34:46.549 "driver_specific": { 00:34:46.549 "lvol": { 00:34:46.549 "lvol_store_uuid": "0cc6189c-eb18-4a54-8c58-121801402af7", 00:34:46.549 "base_bdev": "aio_bdev", 00:34:46.549 "thin_provision": false, 00:34:46.549 "num_allocated_clusters": 38, 00:34:46.549 "snapshot": false, 00:34:46.549 "clone": false, 00:34:46.549 "esnap_clone": false 00:34:46.549 } 00:34:46.549 } 00:34:46.549 } 00:34:46.549 ] 00:34:46.549 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:34:46.549 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:46.549 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:46.549 07:44:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:46.549 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:46.549 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:46.809 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:46.809 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4f425b59-27bc-4903-b302-de4ff551abd3 00:34:47.069 07:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0cc6189c-eb18-4a54-8c58-121801402af7 00:34:47.069 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:47.329 00:34:47.329 real 0m15.987s 00:34:47.329 user 0m15.613s 00:34:47.329 sys 0m1.517s 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:34:47.329 ************************************ 00:34:47.329 END TEST lvs_grow_clean 00:34:47.329 ************************************ 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:47.329 ************************************ 00:34:47.329 START TEST lvs_grow_dirty 00:34:47.329 ************************************ 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:34:47.329 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:34:47.590 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:34:47.590 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:34:47.590 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:47.590 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:47.590 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:47.590 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:34:47.590 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:34:47.851 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=47f94f7e-f0fc-4bf2-bc02-457115684b66 00:34:47.851 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:34:47.851 07:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:34:48.112 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:34:48.112 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:34:48.112 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 lvol 150 00:34:48.112 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e00a9cb9-6020-45a7-b179-bfc4d546f91a 00:34:48.112 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:48.112 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:34:48.372 [2024-11-26 07:44:16.348017] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:34:48.372 [2024-11-26 07:44:16.348196] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:34:48.372 true 00:34:48.372 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:34:48.372 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:34:48.633 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:34:48.633 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:34:48.633 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e00a9cb9-6020-45a7-b179-bfc4d546f91a 00:34:48.894 07:44:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:49.154 [2024-11-26 07:44:17.016601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.154 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:49.154 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1694451 00:34:49.154 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:49.154 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:34:49.154 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1694451 /var/tmp/bdevperf.sock 00:34:49.154 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1694451 ']' 00:34:49.154 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:49.154 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:49.155 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:49.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
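The dirty variant repeats the clean setup with a fresh lvstore (47f94f7e-f0fc-4bf2-bc02-457115684b66) and lvol (e00a9cb9-6020-45a7-b179-bfc4d546f91a); only the teardown will differ. Condensed from the trace above, the NVMe/TCP export that both runs share looks like this (addresses and NQN from this run):

    # Export the lvol over NVMe/TCP before bdevperf attaches.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 \
        e00a9cb9-6020-45a7-b179-bfc4d546f91a
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420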
00:34:49.155 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:49.155 07:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:49.415 [2024-11-26 07:44:17.262852] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:34:49.415 [2024-11-26 07:44:17.262926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1694451 ] 00:34:49.415 [2024-11-26 07:44:17.348033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:49.415 [2024-11-26 07:44:17.381565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:49.987 07:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:49.987 07:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:34:49.987 07:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:34:50.248 Nvme0n1 00:34:50.508 07:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:34:50.508 [ 00:34:50.508 { 00:34:50.508 "name": "Nvme0n1", 00:34:50.508 "aliases": [ 00:34:50.508 "e00a9cb9-6020-45a7-b179-bfc4d546f91a" 00:34:50.508 ], 00:34:50.508 "product_name": "NVMe disk", 00:34:50.508 "block_size": 4096, 00:34:50.508 "num_blocks": 38912, 00:34:50.508 "uuid": "e00a9cb9-6020-45a7-b179-bfc4d546f91a", 00:34:50.508 "numa_id": 0, 00:34:50.508 "assigned_rate_limits": { 00:34:50.508 "rw_ios_per_sec": 0, 00:34:50.508 "rw_mbytes_per_sec": 0, 00:34:50.508 "r_mbytes_per_sec": 0, 00:34:50.508 "w_mbytes_per_sec": 0 00:34:50.508 }, 00:34:50.508 "claimed": false, 00:34:50.508 "zoned": false, 00:34:50.508 "supported_io_types": { 00:34:50.508 "read": true, 00:34:50.508 "write": true, 00:34:50.508 "unmap": true, 00:34:50.508 "flush": true, 00:34:50.508 "reset": true, 00:34:50.508 "nvme_admin": true, 00:34:50.508 "nvme_io": true, 00:34:50.508 "nvme_io_md": false, 00:34:50.508 "write_zeroes": true, 00:34:50.508 "zcopy": false, 00:34:50.508 "get_zone_info": false, 00:34:50.508 "zone_management": false, 00:34:50.508 "zone_append": false, 00:34:50.508 "compare": true, 00:34:50.508 "compare_and_write": true, 00:34:50.508 "abort": true, 00:34:50.508 "seek_hole": false, 00:34:50.508 "seek_data": false, 00:34:50.508 "copy": true, 00:34:50.508 "nvme_iov_md": false 00:34:50.508 }, 00:34:50.508 "memory_domains": [ 00:34:50.508 { 00:34:50.508 "dma_device_id": "system", 00:34:50.508 "dma_device_type": 1 00:34:50.508 } 00:34:50.508 ], 00:34:50.508 "driver_specific": { 00:34:50.508 "nvme": [ 00:34:50.508 { 00:34:50.508 "trid": { 00:34:50.508 "trtype": "TCP", 00:34:50.508 "adrfam": "IPv4", 00:34:50.508 "traddr": "10.0.0.2", 00:34:50.508 "trsvcid": "4420", 00:34:50.508 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:34:50.508 }, 00:34:50.508 "ctrlr_data": 
{ 00:34:50.508 "cntlid": 1, 00:34:50.508 "vendor_id": "0x8086", 00:34:50.509 "model_number": "SPDK bdev Controller", 00:34:50.509 "serial_number": "SPDK0", 00:34:50.509 "firmware_revision": "25.01", 00:34:50.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:50.509 "oacs": { 00:34:50.509 "security": 0, 00:34:50.509 "format": 0, 00:34:50.509 "firmware": 0, 00:34:50.509 "ns_manage": 0 00:34:50.509 }, 00:34:50.509 "multi_ctrlr": true, 00:34:50.509 "ana_reporting": false 00:34:50.509 }, 00:34:50.509 "vs": { 00:34:50.509 "nvme_version": "1.3" 00:34:50.509 }, 00:34:50.509 "ns_data": { 00:34:50.509 "id": 1, 00:34:50.509 "can_share": true 00:34:50.509 } 00:34:50.509 } 00:34:50.509 ], 00:34:50.509 "mp_policy": "active_passive" 00:34:50.509 } 00:34:50.509 } 00:34:50.509 ] 00:34:50.509 07:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1694773 00:34:50.509 07:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:34:50.509 07:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:50.509 Running I/O for 10 seconds... 00:34:51.892 Latency(us) 00:34:51.892 [2024-11-26T06:44:19.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:51.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:51.892 Nvme0n1 : 1.00 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:34:51.892 [2024-11-26T06:44:19.990Z] =================================================================================================================== 00:34:51.892 [2024-11-26T06:44:19.990Z] Total : 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:34:51.892 00:34:52.461 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:34:52.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:52.722 Nvme0n1 : 2.00 17793.50 69.51 0.00 0.00 0.00 0.00 0.00 00:34:52.722 [2024-11-26T06:44:20.820Z] =================================================================================================================== 00:34:52.722 [2024-11-26T06:44:20.820Z] Total : 17793.50 69.51 0.00 0.00 0.00 0.00 0.00 00:34:52.722 00:34:52.722 true 00:34:52.722 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:34:52.722 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:34:52.982 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:34:52.982 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:34:52.982 07:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1694773 00:34:53.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:53.552 Nvme0n1 : 
3.00 17916.00 69.98 0.00 0.00 0.00 0.00 0.00 00:34:53.552 [2024-11-26T06:44:21.650Z] =================================================================================================================== 00:34:53.552 [2024-11-26T06:44:21.650Z] Total : 17916.00 69.98 0.00 0.00 0.00 0.00 0.00 00:34:53.552 00:34:54.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:54.934 Nvme0n1 : 4.00 17977.25 70.22 0.00 0.00 0.00 0.00 0.00 00:34:54.934 [2024-11-26T06:44:23.032Z] =================================================================================================================== 00:34:54.934 [2024-11-26T06:44:23.032Z] Total : 17977.25 70.22 0.00 0.00 0.00 0.00 0.00 00:34:54.934 00:34:55.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:55.876 Nvme0n1 : 5.00 18865.00 73.69 0.00 0.00 0.00 0.00 0.00 00:34:55.876 [2024-11-26T06:44:23.974Z] =================================================================================================================== 00:34:55.876 [2024-11-26T06:44:23.974Z] Total : 18865.00 73.69 0.00 0.00 0.00 0.00 0.00 00:34:55.876 00:34:56.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:56.817 Nvme0n1 : 6.00 19994.00 78.10 0.00 0.00 0.00 0.00 0.00 00:34:56.817 [2024-11-26T06:44:24.915Z] =================================================================================================================== 00:34:56.817 [2024-11-26T06:44:24.915Z] Total : 19994.00 78.10 0.00 0.00 0.00 0.00 0.00 00:34:56.817 00:34:57.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:57.759 Nvme0n1 : 7.00 20793.57 81.22 0.00 0.00 0.00 0.00 0.00 00:34:57.759 [2024-11-26T06:44:25.857Z] =================================================================================================================== 00:34:57.759 [2024-11-26T06:44:25.857Z] Total : 20793.57 81.22 0.00 0.00 0.00 0.00 0.00 00:34:57.759 00:34:58.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:58.702 Nvme0n1 : 8.00 21399.25 83.59 0.00 0.00 0.00 0.00 0.00 00:34:58.702 [2024-11-26T06:44:26.800Z] =================================================================================================================== 00:34:58.702 [2024-11-26T06:44:26.800Z] Total : 21399.25 83.59 0.00 0.00 0.00 0.00 0.00 00:34:58.702 00:34:59.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:34:59.644 Nvme0n1 : 9.00 21872.00 85.44 0.00 0.00 0.00 0.00 0.00 00:34:59.644 [2024-11-26T06:44:27.742Z] =================================================================================================================== 00:34:59.644 [2024-11-26T06:44:27.742Z] Total : 21872.00 85.44 0.00 0.00 0.00 0.00 0.00 00:34:59.644 00:35:00.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:00.588 Nvme0n1 : 10.00 22250.20 86.91 0.00 0.00 0.00 0.00 0.00 00:35:00.588 [2024-11-26T06:44:28.686Z] =================================================================================================================== 00:35:00.588 [2024-11-26T06:44:28.686Z] Total : 22250.20 86.91 0.00 0.00 0.00 0.00 0.00 00:35:00.588 00:35:00.588 00:35:00.588 Latency(us) 00:35:00.588 [2024-11-26T06:44:28.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:00.588 Nvme0n1 : 10.00 22254.24 86.93 0.00 0.00 5749.18 2894.51 27962.03 00:35:00.588 
[2024-11-26T06:44:28.686Z] =================================================================================================================== 00:35:00.588 [2024-11-26T06:44:28.686Z] Total : 22254.24 86.93 0.00 0.00 5749.18 2894.51 27962.03 00:35:00.588 { 00:35:00.588 "results": [ 00:35:00.588 { 00:35:00.588 "job": "Nvme0n1", 00:35:00.588 "core_mask": "0x2", 00:35:00.588 "workload": "randwrite", 00:35:00.588 "status": "finished", 00:35:00.588 "queue_depth": 128, 00:35:00.588 "io_size": 4096, 00:35:00.588 "runtime": 10.003936, 00:35:00.588 "iops": 22254.240730848338, 00:35:00.588 "mibps": 86.93062785487632, 00:35:00.588 "io_failed": 0, 00:35:00.588 "io_timeout": 0, 00:35:00.588 "avg_latency_us": 5749.178385662311, 00:35:00.588 "min_latency_us": 2894.5066666666667, 00:35:00.588 "max_latency_us": 27962.02666666667 00:35:00.588 } 00:35:00.588 ], 00:35:00.588 "core_count": 1 00:35:00.588 } 00:35:00.588 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1694451 00:35:00.588 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1694451 ']' 00:35:00.588 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1694451 00:35:00.588 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:35:00.588 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:00.588 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1694451 00:35:00.848 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:00.849 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:00.849 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1694451' 00:35:00.849 killing process with pid 1694451 00:35:00.849 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1694451 00:35:00.849 Received shutdown signal, test time was about 10.000000 seconds 00:35:00.849 00:35:00.849 Latency(us) 00:35:00.849 [2024-11-26T06:44:28.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:00.849 [2024-11-26T06:44:28.947Z] =================================================================================================================== 00:35:00.849 [2024-11-26T06:44:28.947Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:00.849 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1694451 00:35:00.849 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:01.109 07:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:35:01.109 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:35:01.109 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1690969 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1690969 00:35:01.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1690969 Killed "${NVMF_APP[@]}" "$@" 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1696803 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1696803 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1696803 ']' 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
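Here the dirty branch diverges from the clean one: free_clusters checks out at 61 (99 total minus the lvol's 38 allocated), and instead of a clean unload the first target (pid 1690969) is killed with SIGKILL, leaving the blobstore superblob marked dirty on disk; a second target instance (pid 1696803) is then started to reload it. A sketch of that shutdown/restart, with this run's pids standing in for the variables:

    # SIGKILL skips the clean lvstore unload; the on-disk superblob stays dirty.
    kill -9 "$nvmfpid"
    wait "$nvmfpid" || true
    # A fresh target instance must recover the blobstore when it next opens
    # the same AIO file (see the recovery notices further down).
    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!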
00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:01.369 07:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:01.369 [2024-11-26 07:44:29.447986] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:01.369 [2024-11-26 07:44:29.448974] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:35:01.369 [2024-11-26 07:44:29.449016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.630 [2024-11-26 07:44:29.539971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:01.630 [2024-11-26 07:44:29.569569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.630 [2024-11-26 07:44:29.569594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.630 [2024-11-26 07:44:29.569600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.630 [2024-11-26 07:44:29.569604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.630 [2024-11-26 07:44:29.569608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:01.630 [2024-11-26 07:44:29.570040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.630 [2024-11-26 07:44:29.619708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:01.630 [2024-11-26 07:44:29.619897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
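The restart above is the interrupt-mode launch this suite exercises: the target is started inside the test network namespace with a one-core mask, and the NOTICE lines confirm the reactor and its spdk_threads come up in intr mode. A sketch of the same launch, assuming the build-tree layout this job uses:

  # Start nvmf_tgt pinned to core 0, all tracepoint groups enabled,
  # interrupt mode, inside the namespace that owns the target-side port.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!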
00:35:02.202 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.202 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:02.202 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:02.202 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:02.202 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:02.463 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.463 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:02.463 [2024-11-26 07:44:30.468069] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:35:02.463 [2024-11-26 07:44:30.468302] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:35:02.463 [2024-11-26 07:44:30.468392] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:35:02.463 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:35:02.463 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e00a9cb9-6020-45a7-b179-bfc4d546f91a 00:35:02.463 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e00a9cb9-6020-45a7-b179-bfc4d546f91a 00:35:02.463 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:02.463 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:02.463 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:02.463 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:02.463 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:02.724 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e00a9cb9-6020-45a7-b179-bfc4d546f91a -t 2000 00:35:02.985 [ 00:35:02.985 { 00:35:02.985 "name": "e00a9cb9-6020-45a7-b179-bfc4d546f91a", 00:35:02.985 "aliases": [ 00:35:02.985 "lvs/lvol" 00:35:02.985 ], 00:35:02.985 "product_name": "Logical Volume", 00:35:02.985 "block_size": 4096, 00:35:02.985 "num_blocks": 38912, 00:35:02.985 "uuid": "e00a9cb9-6020-45a7-b179-bfc4d546f91a", 00:35:02.985 "assigned_rate_limits": { 00:35:02.985 "rw_ios_per_sec": 0, 00:35:02.985 "rw_mbytes_per_sec": 0, 00:35:02.985 
"r_mbytes_per_sec": 0, 00:35:02.985 "w_mbytes_per_sec": 0 00:35:02.985 }, 00:35:02.985 "claimed": false, 00:35:02.985 "zoned": false, 00:35:02.985 "supported_io_types": { 00:35:02.985 "read": true, 00:35:02.985 "write": true, 00:35:02.985 "unmap": true, 00:35:02.985 "flush": false, 00:35:02.985 "reset": true, 00:35:02.985 "nvme_admin": false, 00:35:02.985 "nvme_io": false, 00:35:02.985 "nvme_io_md": false, 00:35:02.985 "write_zeroes": true, 00:35:02.985 "zcopy": false, 00:35:02.985 "get_zone_info": false, 00:35:02.985 "zone_management": false, 00:35:02.985 "zone_append": false, 00:35:02.985 "compare": false, 00:35:02.985 "compare_and_write": false, 00:35:02.985 "abort": false, 00:35:02.985 "seek_hole": true, 00:35:02.985 "seek_data": true, 00:35:02.985 "copy": false, 00:35:02.985 "nvme_iov_md": false 00:35:02.985 }, 00:35:02.985 "driver_specific": { 00:35:02.985 "lvol": { 00:35:02.985 "lvol_store_uuid": "47f94f7e-f0fc-4bf2-bc02-457115684b66", 00:35:02.985 "base_bdev": "aio_bdev", 00:35:02.985 "thin_provision": false, 00:35:02.986 "num_allocated_clusters": 38, 00:35:02.986 "snapshot": false, 00:35:02.986 "clone": false, 00:35:02.986 "esnap_clone": false 00:35:02.986 } 00:35:02.986 } 00:35:02.986 } 00:35:02.986 ] 00:35:02.986 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:02.986 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:35:02.986 07:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:35:02.986 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:35:02.986 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:35:02.986 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:35:03.247 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:35:03.247 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:03.508 [2024-11-26 07:44:31.362544] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:35:03.508 07:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:35:03.508 request: 00:35:03.508 { 00:35:03.508 "uuid": "47f94f7e-f0fc-4bf2-bc02-457115684b66", 00:35:03.508 "method": "bdev_lvol_get_lvstores", 00:35:03.508 "req_id": 1 00:35:03.508 } 00:35:03.508 Got JSON-RPC error response 00:35:03.508 response: 00:35:03.508 { 00:35:03.508 "code": -19, 00:35:03.508 "message": "No such device" 00:35:03.508 } 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:03.508 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:03.770 aio_bdev 00:35:03.770 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e00a9cb9-6020-45a7-b179-bfc4d546f91a 00:35:03.770 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e00a9cb9-6020-45a7-b179-bfc4d546f91a 00:35:03.770 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:03.770 07:44:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:35:03.770 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:03.770 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:03.770 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:04.031 07:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e00a9cb9-6020-45a7-b179-bfc4d546f91a -t 2000 00:35:04.031 [ 00:35:04.031 { 00:35:04.031 "name": "e00a9cb9-6020-45a7-b179-bfc4d546f91a", 00:35:04.031 "aliases": [ 00:35:04.031 "lvs/lvol" 00:35:04.031 ], 00:35:04.031 "product_name": "Logical Volume", 00:35:04.031 "block_size": 4096, 00:35:04.031 "num_blocks": 38912, 00:35:04.031 "uuid": "e00a9cb9-6020-45a7-b179-bfc4d546f91a", 00:35:04.031 "assigned_rate_limits": { 00:35:04.031 "rw_ios_per_sec": 0, 00:35:04.031 "rw_mbytes_per_sec": 0, 00:35:04.031 "r_mbytes_per_sec": 0, 00:35:04.031 "w_mbytes_per_sec": 0 00:35:04.031 }, 00:35:04.031 "claimed": false, 00:35:04.031 "zoned": false, 00:35:04.031 "supported_io_types": { 00:35:04.031 "read": true, 00:35:04.031 "write": true, 00:35:04.031 "unmap": true, 00:35:04.031 "flush": false, 00:35:04.031 "reset": true, 00:35:04.031 "nvme_admin": false, 00:35:04.031 "nvme_io": false, 00:35:04.031 "nvme_io_md": false, 00:35:04.031 "write_zeroes": true, 00:35:04.031 "zcopy": false, 00:35:04.031 "get_zone_info": false, 00:35:04.031 "zone_management": false, 00:35:04.031 "zone_append": false, 00:35:04.031 "compare": false, 00:35:04.031 "compare_and_write": false, 00:35:04.031 "abort": false, 00:35:04.031 "seek_hole": true, 00:35:04.031 "seek_data": true, 00:35:04.031 "copy": false, 00:35:04.031 "nvme_iov_md": false 00:35:04.031 }, 00:35:04.031 "driver_specific": { 00:35:04.031 "lvol": { 00:35:04.031 "lvol_store_uuid": "47f94f7e-f0fc-4bf2-bc02-457115684b66", 00:35:04.031 "base_bdev": "aio_bdev", 00:35:04.031 "thin_provision": false, 00:35:04.031 "num_allocated_clusters": 38, 00:35:04.031 "snapshot": false, 00:35:04.031 "clone": false, 00:35:04.031 "esnap_clone": false 00:35:04.031 } 00:35:04.031 } 00:35:04.031 } 00:35:04.031 ] 00:35:04.031 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:35:04.031 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:35:04.031 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:04.292 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:04.292 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:35:04.292 07:44:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:04.552 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:04.552 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e00a9cb9-6020-45a7-b179-bfc4d546f91a 00:35:04.552 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47f94f7e-f0fc-4bf2-bc02-457115684b66 00:35:04.813 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:05.074 07:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:35:05.074 00:35:05.074 real 0m17.599s 00:35:05.074 user 0m35.409s 00:35:05.074 sys 0m3.233s 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:05.074 ************************************ 00:35:05.074 END TEST lvs_grow_dirty 00:35:05.074 ************************************ 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:35:05.074 nvmf_trace.0 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
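The process_shm step above preserves the trace shared-memory file before teardown, so the /dev/shm/nvmf_trace.0 snapshot the earlier NOTICE pointed at survives for offline analysis. A sketch of that archival step, with the job's output directory abbreviated to a placeholder:

  # Find the app's trace shm file(s) and pack them into the output dir.
  # OUTPUT_DIR is a placeholder for the job's ../output path.
  shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
  for n in $shm_files; do
      tar -C /dev/shm/ -cvzf "$OUTPUT_DIR/${n}_shm.tar.gz" "$n"
  done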
00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.074 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:05.074 rmmod nvme_tcp 00:35:05.074 rmmod nvme_fabrics 00:35:05.335 rmmod nvme_keyring 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1696803 ']' 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1696803 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1696803 ']' 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1696803 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1696803 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1696803' 00:35:05.335 killing process with pid 1696803 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1696803 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1696803 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.335 07:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.402 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:07.402 00:35:07.403 real 0m45.034s 00:35:07.403 user 0m54.012s 00:35:07.403 sys 0m10.939s 00:35:07.403 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.403 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:07.403 ************************************ 00:35:07.403 END TEST nvmf_lvs_grow 00:35:07.403 ************************************ 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:07.664 ************************************ 00:35:07.664 START TEST nvmf_bdev_io_wait 00:35:07.664 ************************************ 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:35:07.664 * Looking for test storage... 
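The nvmftestfini teardown traced just above runs in a fixed order: unload the kernel NVMe-oF modules (removing nvme-tcp pulls nvme_fabrics and nvme_keyring out with it), kill the target, then strip the test's firewall rules. The rules are strippable because each one was added with an SPDK_NVMF comment tag, as the ipts helper later in this log shows. A sketch of that rule cleanup under the same tagging convention:

  # Drop every rule the tests added (each carries an SPDK_NVMF comment),
  # leaving unrelated rules intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore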
00:35:07.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.664 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:07.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.926 --rc genhtml_branch_coverage=1 00:35:07.926 --rc genhtml_function_coverage=1 00:35:07.926 --rc genhtml_legend=1 00:35:07.926 --rc geninfo_all_blocks=1 00:35:07.926 --rc geninfo_unexecuted_blocks=1 00:35:07.926 00:35:07.926 ' 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:07.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.926 --rc genhtml_branch_coverage=1 00:35:07.926 --rc genhtml_function_coverage=1 00:35:07.926 --rc genhtml_legend=1 00:35:07.926 --rc geninfo_all_blocks=1 00:35:07.926 --rc geninfo_unexecuted_blocks=1 00:35:07.926 00:35:07.926 ' 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:07.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.926 --rc genhtml_branch_coverage=1 00:35:07.926 --rc genhtml_function_coverage=1 00:35:07.926 --rc genhtml_legend=1 00:35:07.926 --rc geninfo_all_blocks=1 00:35:07.926 --rc geninfo_unexecuted_blocks=1 00:35:07.926 00:35:07.926 ' 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:07.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.926 --rc genhtml_branch_coverage=1 00:35:07.926 --rc genhtml_function_coverage=1 00:35:07.926 --rc genhtml_legend=1 00:35:07.926 --rc geninfo_all_blocks=1 00:35:07.926 --rc 
geninfo_unexecuted_blocks=1 00:35:07.926 00:35:07.926 ' 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:07.926 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:35:07.927 07:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.068 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.068 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.068 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:16.068 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
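The arrays being filled above are common.sh's NIC-classification tables: candidate devices are bucketed by PCI vendor/device ID (Intel E810 and X722 parts plus a list of Mellanox IDs), and since this job runs with SPDK_TEST_NVMF_NICS=e810 only the e810 bucket is kept. A condensed sketch of the bucketing, assuming pci_bus_cache (a "vendor:device" to PCI-address map that common.sh populates earlier) already exists:

  # pci_bus_cache maps "vendor:device" -> PCI addresses; its population is
  # assumed here. The device IDs below are the ones visible in this trace.
  intel=0x8086
  e810=()
  e810+=(${pci_bus_cache["$intel:0x1592"]})   # E810-C
  e810+=(${pci_bus_cache["$intel:0x159b"]})   # E810-XXV, the parts found below
  pci_devs=("${e810[@]}")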
00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
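For each surviving PCI function the script resolves the kernel netdev through sysfs rather than guessing interface names, which is how it arrives at the cvl_0_0/cvl_0_1 interfaces reported in the Found lines. The lookup reduces to a one-liner; a sketch using the first port from this run:

  # The kernel exposes a device's net interfaces under its PCI sysfs node.
  pci=0000:4b:00.0
  ls "/sys/bus/pci/devices/$pci/net/"   # prints cvl_0_0 on this machine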
00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:16.069 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:16.069 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:16.069 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:16.069 
07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:16.069 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.069 07:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.069 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.069 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.069 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.069 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.069 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.069 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.069 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.069 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:16.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:35:16.069 00:35:16.070 --- 10.0.0.2 ping statistics --- 00:35:16.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.070 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:16.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:35:16.070 00:35:16.070 --- 10.0.0.1 ping statistics --- 00:35:16.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.070 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1701867 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1701867 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1701867 ']' 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
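The two pings traced in this block are nvmf_tcp_init's sanity check that the namespace split it just built actually routes in both directions: the host side reaches the target address, and the target namespace reaches back to the initiator address. A sketch of the same check:

  # Host -> target address (10.0.0.2 lives on cvl_0_0 inside the namespace)...
  ping -c 1 10.0.0.2
  # ...and target namespace -> initiator address (10.0.0.1 on cvl_0_1).
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1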
00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:16.070 07:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.070 [2024-11-26 07:44:43.398258] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:16.070 [2024-11-26 07:44:43.399631] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:35:16.070 [2024-11-26 07:44:43.399680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:16.070 [2024-11-26 07:44:43.496409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:16.070 [2024-11-26 07:44:43.535792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:16.070 [2024-11-26 07:44:43.535827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:16.070 [2024-11-26 07:44:43.535835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:16.070 [2024-11-26 07:44:43.535842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:16.070 [2024-11-26 07:44:43.535847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:16.070 [2024-11-26 07:44:43.537487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.070 [2024-11-26 07:44:43.537693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:16.070 [2024-11-26 07:44:43.537777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.070 [2024-11-26 07:44:43.537777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:16.070 [2024-11-26 07:44:43.538216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
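The block above is the harness's TCP environment bring-up (the nvmf_tcp_init helper in nvmf/common.sh): one e810 port, cvl_0_0, is moved into a fresh network namespace and addressed 10.0.0.2 as the target side; its peer cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens TCP port 4420; a ping in each direction proves the path; and nvmf_tgt is then launched inside the namespace with --interrupt-mode and --wait-for-rpc. A minimal standalone sketch of the same bring-up, with eth_t and eth_i as hypothetical stand-ins for cvl_0_0 and cvl_0_1 and the nvmf_tgt path abbreviated:

# Sketch only; run as root. eth_t/eth_i are placeholder interface names.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set eth_t netns "$NS"              # target-side port into the namespace
ip addr add 10.0.0.1/24 dev eth_i          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev eth_t
ip link set eth_i up
ip netns exec "$NS" ip link set eth_t up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment tags the rule so teardown
# can later strip it with iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i eth_i -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i eth_i -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                         # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> initiator
# Start the target inside the namespace, paused until RPC configuration.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &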
00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.332 [2024-11-26 07:44:44.302302] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:16.332 [2024-11-26 07:44:44.303178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:16.332 [2024-11-26 07:44:44.303204] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:16.332 [2024-11-26 07:44:44.303456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
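Ordering is the point of the exchange above: because nvmf_tgt was started with --wait-for-rpc, the framework is not yet initialized, so bdev_set_options -p 5 -c 1 (a deliberately tiny bdev_io pool and cache, which is evidently what later pushes I/O onto the IO_WAIT path this test exercises) can still take effect; framework_start_init then completes startup, at which point each nvmf_tgt poll-group thread is flipped to interrupt mode. Issued by hand through scripts/rpc.py, the same sequence would look roughly like this, mirroring bdev_io_wait.sh steps @18 through @25 as echoed in the log (the provisioning calls follow just below):

# Options must be set while the target is still waiting for RPC;
# they cannot be changed once the framework has initialized.
./scripts/rpc.py bdev_set_options -p 5 -c 1     # bdev_io pool size 5, cache size 1
./scripts/rpc.py framework_start_init
# Provisioning: transport, backing bdev, subsystem, namespace, listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420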
00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.332 [2024-11-26 07:44:44.314420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.332 Malloc0 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:16.332 [2024-11-26 07:44:44.386989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1701927 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1701930 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:16.332 { 00:35:16.332 "params": { 00:35:16.332 "name": "Nvme$subsystem", 00:35:16.332 "trtype": "$TEST_TRANSPORT", 00:35:16.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:16.332 "adrfam": "ipv4", 00:35:16.332 "trsvcid": "$NVMF_PORT", 00:35:16.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:16.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:16.332 "hdgst": ${hdgst:-false}, 00:35:16.332 "ddgst": ${ddgst:-false} 00:35:16.332 }, 00:35:16.332 "method": "bdev_nvme_attach_controller" 00:35:16.332 } 00:35:16.332 EOF 00:35:16.332 )") 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1701932 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1701936 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:16.332 { 00:35:16.332 "params": { 00:35:16.332 "name": "Nvme$subsystem", 00:35:16.332 "trtype": "$TEST_TRANSPORT", 00:35:16.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:16.332 "adrfam": "ipv4", 00:35:16.332 "trsvcid": "$NVMF_PORT", 00:35:16.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:16.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:16.332 "hdgst": ${hdgst:-false}, 00:35:16.332 "ddgst": ${ddgst:-false} 00:35:16.332 }, 00:35:16.332 "method": "bdev_nvme_attach_controller" 00:35:16.332 } 00:35:16.332 EOF 00:35:16.332 )") 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:16.332 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:16.333 { 00:35:16.333 "params": { 00:35:16.333 "name": "Nvme$subsystem", 00:35:16.333 "trtype": "$TEST_TRANSPORT", 00:35:16.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:16.333 "adrfam": "ipv4", 00:35:16.333 "trsvcid": "$NVMF_PORT", 00:35:16.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:16.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:16.333 "hdgst": ${hdgst:-false}, 00:35:16.333 "ddgst": ${ddgst:-false} 00:35:16.333 }, 00:35:16.333 "method": "bdev_nvme_attach_controller" 00:35:16.333 } 00:35:16.333 EOF 00:35:16.333 )") 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:16.333 { 00:35:16.333 "params": { 00:35:16.333 "name": "Nvme$subsystem", 00:35:16.333 "trtype": "$TEST_TRANSPORT", 00:35:16.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:16.333 "adrfam": "ipv4", 00:35:16.333 "trsvcid": "$NVMF_PORT", 00:35:16.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:16.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:16.333 "hdgst": ${hdgst:-false}, 00:35:16.333 "ddgst": ${ddgst:-false} 00:35:16.333 }, 00:35:16.333 "method": "bdev_nvme_attach_controller" 00:35:16.333 } 00:35:16.333 EOF 00:35:16.333 )") 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1701927 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:16.333 "params": { 00:35:16.333 "name": "Nvme1", 00:35:16.333 "trtype": "tcp", 00:35:16.333 "traddr": "10.0.0.2", 00:35:16.333 "adrfam": "ipv4", 00:35:16.333 "trsvcid": "4420", 00:35:16.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:16.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:16.333 "hdgst": false, 00:35:16.333 "ddgst": false 00:35:16.333 }, 00:35:16.333 "method": "bdev_nvme_attach_controller" 00:35:16.333 }' 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:16.333 "params": { 00:35:16.333 "name": "Nvme1", 00:35:16.333 "trtype": "tcp", 00:35:16.333 "traddr": "10.0.0.2", 00:35:16.333 "adrfam": "ipv4", 00:35:16.333 "trsvcid": "4420", 00:35:16.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:16.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:16.333 "hdgst": false, 00:35:16.333 "ddgst": false 00:35:16.333 }, 00:35:16.333 "method": "bdev_nvme_attach_controller" 00:35:16.333 }' 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:16.333 "params": { 00:35:16.333 "name": "Nvme1", 00:35:16.333 "trtype": "tcp", 00:35:16.333 "traddr": "10.0.0.2", 00:35:16.333 "adrfam": "ipv4", 00:35:16.333 "trsvcid": "4420", 00:35:16.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:16.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:16.333 "hdgst": false, 00:35:16.333 "ddgst": false 00:35:16.333 }, 00:35:16.333 "method": "bdev_nvme_attach_controller" 00:35:16.333 }' 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:35:16.333 07:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:16.333 "params": { 00:35:16.333 "name": "Nvme1", 00:35:16.333 "trtype": "tcp", 00:35:16.333 "traddr": "10.0.0.2", 00:35:16.333 "adrfam": "ipv4", 00:35:16.333 "trsvcid": "4420", 00:35:16.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:16.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:16.333 "hdgst": false, 00:35:16.333 "ddgst": false 00:35:16.333 }, 00:35:16.333 "method": "bdev_nvme_attach_controller" 00:35:16.333 }' 00:35:16.593 [2024-11-26 07:44:44.445334] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:35:16.593 [2024-11-26 07:44:44.445405] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:35:16.593 [2024-11-26 07:44:44.447370] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:35:16.593 [2024-11-26 07:44:44.447429] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:35:16.593 [2024-11-26 07:44:44.448350] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:35:16.593 [2024-11-26 07:44:44.448355] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:35:16.593 [2024-11-26 07:44:44.448416] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:35:16.593 [2024-11-26 07:44:44.448419] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:35:16.593 [2024-11-26 07:44:44.639164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.593 [2024-11-26 07:44:44.670719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:16.853 [2024-11-26 07:44:44.699389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.853 [2024-11-26 07:44:44.728529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:16.853 [2024-11-26 07:44:44.746749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.853 [2024-11-26 07:44:44.776033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:16.853 [2024-11-26 07:44:44.805590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.853 [2024-11-26 07:44:44.835169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:16.853 Running I/O for 1 seconds... 00:35:17.113 Running I/O for 1 seconds... 00:35:17.113 Running I/O for 1 seconds... 00:35:17.113 Running I/O for 1 seconds...
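The four "Running I/O for 1 seconds..." lines come from four bdevperf instances launched in parallel, one per workload: write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80. Each gets a distinct -i shm id, hence the non-colliding DPDK file prefixes spdk1 through spdk4 above, and each reads its controller config as JSON over --json /dev/fd/63, i.e. bash process substitution of gen_nvmf_target_json's output. A sketch of a single launch follows; only the inner "params" objects appear verbatim in the log, so the wrapper object here is an assumed reconstruction, and gen_json is a hypothetical stand-in for gen_nvmf_target_json:

# Hypothetical reconstruction of the generated attach config.
gen_json() {
    cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
}
# <(gen_json) surfaces in the child as /dev/fd/63, matching the log line.
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
wait "$WRITE_PID"   # the harness waits on all four PIDs the same way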
00:35:18.054 9070.00 IOPS, 35.43 MiB/s
00:35:18.054 Latency(us)
[2024-11-26T06:44:46.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:18.054 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:35:18.054 Nvme1n1 : 1.02 9075.73 35.45 0.00 0.00 14008.57 3249.49 19770.03
00:35:18.054 [2024-11-26T06:44:46.152Z] ===================================================================================================================
00:35:18.054 [2024-11-26T06:44:46.152Z] Total : 9075.73 35.45 0.00 0.00 14008.57 3249.49 19770.03
00:35:18.054 181744.00 IOPS, 709.94 MiB/s
00:35:18.054 Latency(us)
[2024-11-26T06:44:46.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:18.054 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:35:18.054 Nvme1n1 : 1.00 181386.14 708.54 0.00 0.00 701.24 303.79 1966.08
00:35:18.054 [2024-11-26T06:44:46.152Z] ===================================================================================================================
00:35:18.054 [2024-11-26T06:44:46.152Z] Total : 181386.14 708.54 0.00 0.00 701.24 303.79 1966.08
00:35:18.054 8226.00 IOPS, 32.13 MiB/s
00:35:18.054 Latency(us)
[2024-11-26T06:44:46.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:18.054 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:35:18.054 Nvme1n1 : 1.01 8303.66 32.44 0.00 0.00 15362.74 4751.36 25231.36
00:35:18.054 [2024-11-26T06:44:46.152Z] ===================================================================================================================
00:35:18.054 [2024-11-26T06:44:46.152Z] Total : 8303.66 32.44 0.00 0.00 15362.74 4751.36 25231.36
00:35:18.054 13529.00 IOPS, 52.85 MiB/s
00:35:18.054 Latency(us)
[2024-11-26T06:44:46.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:18.054 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:35:18.054 Nvme1n1 : 1.01 13598.29 53.12 0.00 0.00 9385.39 2116.27 15073.28
00:35:18.054 [2024-11-26T06:44:46.152Z] ===================================================================================================================
00:35:18.054 [2024-11-26T06:44:46.152Z] Total : 13598.29 53.12 0.00 0.00 9385.39 2116.27 15073.28
00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1701930 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1701932 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1701936 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.315 rmmod nvme_tcp 00:35:18.315 rmmod nvme_fabrics 00:35:18.315 rmmod nvme_keyring 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1701867 ']' 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1701867 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1701867 ']' 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1701867 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1701867 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1701867' 00:35:18.315 killing process with pid 1701867 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1701867 00:35:18.315 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1701867 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.576 07:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.122 00:35:21.122 real 0m13.027s 00:35:21.122 user 0m15.687s 00:35:21.122 sys 0m7.553s 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:35:21.122 ************************************ 00:35:21.122 END TEST nvmf_bdev_io_wait 00:35:21.122 ************************************ 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:21.122 ************************************ 00:35:21.122 START TEST nvmf_queue_depth 00:35:21.122 ************************************ 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:35:21.122 * Looking for test storage... 
00:35:21.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:21.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.122 --rc genhtml_branch_coverage=1 00:35:21.122 --rc genhtml_function_coverage=1 00:35:21.122 --rc genhtml_legend=1 00:35:21.122 --rc geninfo_all_blocks=1 00:35:21.122 --rc geninfo_unexecuted_blocks=1 00:35:21.122 00:35:21.122 ' 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:21.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.122 --rc genhtml_branch_coverage=1 00:35:21.122 --rc genhtml_function_coverage=1 00:35:21.122 --rc genhtml_legend=1 00:35:21.122 --rc geninfo_all_blocks=1 00:35:21.122 --rc geninfo_unexecuted_blocks=1 00:35:21.122 00:35:21.122 ' 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:21.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.122 --rc genhtml_branch_coverage=1 00:35:21.122 --rc genhtml_function_coverage=1 00:35:21.122 --rc genhtml_legend=1 00:35:21.122 --rc geninfo_all_blocks=1 00:35:21.122 --rc geninfo_unexecuted_blocks=1 00:35:21.122 00:35:21.122 ' 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:21.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.122 --rc genhtml_branch_coverage=1 00:35:21.122 --rc genhtml_function_coverage=1 00:35:21.122 --rc genhtml_legend=1 00:35:21.122 --rc geninfo_all_blocks=1 00:35:21.122 --rc 
geninfo_unexecuted_blocks=1 00:35:21.122 00:35:21.122 ' 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.122 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:35:21.123 07:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:27.706 07:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:27.706 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:27.706 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.706 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:35:27.707 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:27.707 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.707 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.967 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.968 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.968 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.968 07:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.968 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.968 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.968 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.968 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:28.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:28.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:35:28.228 00:35:28.228 --- 10.0.0.2 ping statistics --- 00:35:28.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.228 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:28.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:28.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:35:28.228 00:35:28.228 --- 10.0.0.1 ping statistics --- 00:35:28.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:28.228 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1706586 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1706586 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1706586 ']' 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:28.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
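The nvmf_tcp_init sequence traced above builds a two-endpoint NVMe/TCP topology on a single host by moving one port of the E810 NIC into a private network namespace. A minimal sketch of the same steps, assuming this run's interface names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addressing:

    # Start from clean interfaces, then isolate the target-side port in its
    # own namespace so initiator and target cross a real network path.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side stays in the root namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # Target side is configured inside the namespace.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port; the tag in the comment lets nvmftestfini
    # strip exactly these rules later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1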
00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:28.228 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:28.228 [2024-11-26 07:44:56.178722] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:28.228 [2024-11-26 07:44:56.179717] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:35:28.228 [2024-11-26 07:44:56.179759] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:28.228 [2024-11-26 07:44:56.276366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.228 [2024-11-26 07:44:56.317706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:28.228 [2024-11-26 07:44:56.317747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:28.228 [2024-11-26 07:44:56.317757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:28.228 [2024-11-26 07:44:56.317764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:28.228 [2024-11-26 07:44:56.317770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:28.228 [2024-11-26 07:44:56.318415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:28.489 [2024-11-26 07:44:56.390620] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:28.489 [2024-11-26 07:44:56.390912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
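As the nvmfpid line above shows, nvmfappstart prefixes NVMF_APP with the namespace command and launches the target pinned to core 1, in interrupt mode, with all tracepoint groups enabled. Roughly (paths relative to the spdk checkout; the readiness loop is a sketch of what the harness's waitforlisten does):

    # -m 0x2 pins the app to core 1, -e 0xFFFF enables every tracepoint
    # group, and --interrupt-mode makes the reactor sleep on events
    # instead of busy-polling.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!

    # Block until the app answers on /var/tmp/spdk.sock (sketch of waitforlisten).
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done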
00:35:29.060 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.060 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:29.060 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:29.060 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:29.060 07:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:29.060 [2024-11-26 07:44:57.019214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:29.060 Malloc0 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:29.060 [2024-11-26 07:44:57.103353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1706635 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1706635 /var/tmp/bdevperf.sock 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1706635 ']' 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:29.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.060 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:29.321 [2024-11-26 07:44:57.166746] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:35:29.321 [2024-11-26 07:44:57.166801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706635 ] 00:35:29.321 [2024-11-26 07:44:57.254642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.321 [2024-11-26 07:44:57.291687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.890 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.890 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:35:29.890 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:35:29.890 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.890 07:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:30.151 NVMe0n1 00:35:30.151 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.151 07:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:30.411 Running I/O for 10 seconds... 00:35:32.293 8206.00 IOPS, 32.05 MiB/s [2024-11-26T06:45:01.333Z] 8714.50 IOPS, 34.04 MiB/s [2024-11-26T06:45:02.716Z] 9559.33 IOPS, 37.34 MiB/s [2024-11-26T06:45:03.287Z] 10501.00 IOPS, 41.02 MiB/s [2024-11-26T06:45:04.671Z] 11063.60 IOPS, 43.22 MiB/s [2024-11-26T06:45:05.612Z] 11446.17 IOPS, 44.71 MiB/s [2024-11-26T06:45:06.553Z] 11751.43 IOPS, 45.90 MiB/s [2024-11-26T06:45:07.494Z] 12010.75 IOPS, 46.92 MiB/s [2024-11-26T06:45:08.435Z] 12186.89 IOPS, 47.61 MiB/s [2024-11-26T06:45:08.435Z] 12345.00 IOPS, 48.22 MiB/s 00:35:40.337 Latency(us) 00:35:40.337 [2024-11-26T06:45:08.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.337 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:35:40.337 Verification LBA range: start 0x0 length 0x4000 00:35:40.337 NVMe0n1 : 10.05 12381.26 48.36 0.00 0.00 82389.18 16711.68 74274.13 00:35:40.337 [2024-11-26T06:45:08.435Z] =================================================================================================================== 00:35:40.337 [2024-11-26T06:45:08.435Z] Total : 12381.26 48.36 0.00 0.00 82389.18 16711.68 74274.13 00:35:40.337 { 00:35:40.337 "results": [ 00:35:40.337 { 00:35:40.337 "job": "NVMe0n1", 00:35:40.337 "core_mask": "0x1", 00:35:40.337 "workload": "verify", 00:35:40.337 "status": "finished", 00:35:40.337 "verify_range": { 00:35:40.337 "start": 0, 00:35:40.337 "length": 16384 00:35:40.337 }, 00:35:40.337 "queue_depth": 1024, 00:35:40.337 "io_size": 4096, 00:35:40.337 "runtime": 10.05342, 00:35:40.337 "iops": 12381.259312751283, 00:35:40.337 "mibps": 48.3642941904347, 00:35:40.337 "io_failed": 0, 00:35:40.337 "io_timeout": 0, 00:35:40.337 "avg_latency_us": 82389.18221036789, 00:35:40.337 "min_latency_us": 16711.68, 00:35:40.337 "max_latency_us": 74274.13333333333 00:35:40.337 } 00:35:40.337 ], 
00:35:40.337 "core_count": 1 00:35:40.337 } 00:35:40.337 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1706635 00:35:40.337 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1706635 ']' 00:35:40.337 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1706635 00:35:40.337 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:40.337 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:40.337 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706635 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706635' 00:35:40.598 killing process with pid 1706635 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1706635 00:35:40.598 Received shutdown signal, test time was about 10.000000 seconds 00:35:40.598 00:35:40.598 Latency(us) 00:35:40.598 [2024-11-26T06:45:08.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.598 [2024-11-26T06:45:08.696Z] =================================================================================================================== 00:35:40.598 [2024-11-26T06:45:08.696Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1706635 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:40.598 rmmod nvme_tcp 00:35:40.598 rmmod nvme_fabrics 00:35:40.598 rmmod nvme_keyring 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:35:40.598 07:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1706586 ']' 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1706586 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1706586 ']' 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1706586 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1706586 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706586' 00:35:40.598 killing process with pid 1706586 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1706586 00:35:40.598 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1706586 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:40.857 07:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.397 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:43.397 00:35:43.397 real 0m22.200s 00:35:43.397 user 0m24.623s 00:35:43.397 sys 0m7.197s 00:35:43.397 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:43.398 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:35:43.398 ************************************ 00:35:43.398 END TEST nvmf_queue_depth 00:35:43.398 ************************************ 00:35:43.398 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:43.398 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:43.398 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:43.398 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:43.398 ************************************ 00:35:43.398 START TEST nvmf_target_multipath 00:35:43.398 ************************************ 00:35:43.398 07:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:35:43.398 * Looking for test storage... 00:35:43.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:35:43.398 07:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.398 --rc genhtml_branch_coverage=1 00:35:43.398 --rc genhtml_function_coverage=1 00:35:43.398 --rc genhtml_legend=1 00:35:43.398 --rc geninfo_all_blocks=1 00:35:43.398 --rc geninfo_unexecuted_blocks=1 00:35:43.398 00:35:43.398 ' 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.398 --rc genhtml_branch_coverage=1 00:35:43.398 --rc genhtml_function_coverage=1 00:35:43.398 --rc genhtml_legend=1 00:35:43.398 --rc geninfo_all_blocks=1 00:35:43.398 --rc geninfo_unexecuted_blocks=1 00:35:43.398 00:35:43.398 ' 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.398 --rc genhtml_branch_coverage=1 00:35:43.398 --rc genhtml_function_coverage=1 00:35:43.398 --rc genhtml_legend=1 00:35:43.398 --rc geninfo_all_blocks=1 00:35:43.398 --rc 
geninfo_unexecuted_blocks=1 00:35:43.398 00:35:43.398 ' 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:43.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.398 --rc genhtml_branch_coverage=1 00:35:43.398 --rc genhtml_function_coverage=1 00:35:43.398 --rc genhtml_legend=1 00:35:43.398 --rc geninfo_all_blocks=1 00:35:43.398 --rc geninfo_unexecuted_blocks=1 00:35:43.398 00:35:43.398 ' 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.398 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
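The lt 1.15 2 trace above is scripts/common.sh deciding whether the installed lcov predates version 2, which determines which --rc option names the coverage setup uses. The comparison is a component-wise numeric compare of the dotted versions, roughly (a condensed sketch; the real cmp_versions also handles '>', '=', and '-'/':' separators):

    version_lt() {
        local -a v1 v2
        local i
        IFS=. read -ra v1 <<<"$1"
        IFS=. read -ra v2 <<<"$2"
        # Compare components left to right; missing components count as 0.
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov older than 2: use legacy --rc names"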
00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.399 07:45:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:35:43.399 07:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
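nvmftestinit now re-runs NIC discovery for the multipath test. The gather_supported_nvmf_pci_devs trace that follows walks /sys/bus/pci and compares each function's vendor:device pair against the e810/x722/mlx id tables; in essence (a sketch reduced to the E810 ids that match on this machine):

    # Keep PCI functions whose ids are in the supported tables, then
    # resolve each to its kernel net device through sysfs.
    for pci in /sys/bus/pci/devices/*; do
        id="$(cat "$pci/vendor"):$(cat "$pci/device")"
        case "$id" in
            0x8086:0x1592|0x8086:0x159b)    # E810 ids from the e810 table
                for net in "$pci"/net/*; do
                    [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
                done ;;
        esac
    done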
00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:51.537 07:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:51.537 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:51.537 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:51.537 07:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:51.537 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:51.538 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:51.538 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:51.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:51.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:35:51.538 00:35:51.538 --- 10.0.0.2 ping statistics --- 00:35:51.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.538 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:51.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:51.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:35:51.538 00:35:51.538 --- 10.0.0.1 ping statistics --- 00:35:51.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.538 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:35:51.538 only one NIC for nvmf test 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:51.538 rmmod nvme_tcp 00:35:51.538 rmmod nvme_fabrics 00:35:51.538 rmmod nvme_keyring 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:51.538 07:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:51.538 07:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:35:52.926 07:45:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:52.926 00:35:52.926 real 0m9.679s 00:35:52.926 user 0m2.111s 00:35:52.926 sys 0m5.527s 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:35:52.926 ************************************ 00:35:52.926 END TEST nvmf_target_multipath 00:35:52.926 ************************************ 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:52.926 ************************************ 00:35:52.926 START TEST nvmf_zcopy 00:35:52.926 ************************************ 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:35:52.926 * Looking for test storage... 
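The START TEST/END TEST banners and the real/user/sys summary above are produced by the run_test wrapper in autotest_common.sh. Its shape, sketched from the traced argument check ('[' 4 -le 1 ']') and banner output (banner width and exact internals are approximations):

    run_test() {                      # run_test <name> <cmd> [args...]
        local name=$1; shift          # '[' 4 -le 1 ']' in the trace is its arg-count guard
        printf '%s\n' '************************************' "START TEST $name" '************************************'
        time "$@"                     # source of the real/user/sys lines in the log
        printf '%s\n' '************************************' "END TEST $name" '************************************'
    }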
00:35:52.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:52.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.926 --rc genhtml_branch_coverage=1 00:35:52.926 --rc genhtml_function_coverage=1 00:35:52.926 --rc genhtml_legend=1 00:35:52.926 --rc geninfo_all_blocks=1 00:35:52.926 --rc geninfo_unexecuted_blocks=1 00:35:52.926 00:35:52.926 ' 00:35:52.926 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:52.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.927 --rc genhtml_branch_coverage=1 00:35:52.927 --rc genhtml_function_coverage=1 00:35:52.927 --rc genhtml_legend=1 00:35:52.927 --rc geninfo_all_blocks=1 00:35:52.927 --rc geninfo_unexecuted_blocks=1 00:35:52.927 00:35:52.927 ' 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:52.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.927 --rc genhtml_branch_coverage=1 00:35:52.927 --rc genhtml_function_coverage=1 00:35:52.927 --rc genhtml_legend=1 00:35:52.927 --rc geninfo_all_blocks=1 00:35:52.927 --rc geninfo_unexecuted_blocks=1 00:35:52.927 00:35:52.927 ' 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:52.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.927 --rc genhtml_branch_coverage=1 00:35:52.927 --rc genhtml_function_coverage=1 00:35:52.927 --rc genhtml_legend=1 00:35:52.927 --rc geninfo_all_blocks=1 00:35:52.927 --rc geninfo_unexecuted_blocks=1 00:35:52.927 00:35:52.927 ' 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
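The dense scripts/common.sh trace above is just a version comparison: lt 1.15 2 checks whether the detected lcov predates 2.x before the coverage flag sets are exported. Roughly (a sketch; the real cmp_versions also tracks lt/gt/eq counters and handles '>', '==', and the le/ge forms):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {                  # e.g. cmp_versions 1.15 '<' 2
        local op=$2 v ver1 ver2
        IFS=.- read -ra ver1 <<< "$1" # the IFS=.- / read -ra pairs in the trace
        IFS=.- read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} == ${ver2[v]:-0} )) && continue
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == '<' ]] && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == '>' ]] && return 0
            return 1
        done
        [[ $op == *'='* ]]            # equal versions satisfy only <=, >=, ==
    }

Here 1.15 vs 2 compares component-wise (1 < 2), so lt succeeds and the 1.x-style lcov_rc_opt flags seen in the trace are used.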
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.927 07:45:20 
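The extreme PATH duplication above is benign: every nested source of paths/export.sh prepends the same Go/protoc/golangci directories again before exporting. Purely for illustration (not part of the test suite), a dedup pass would collapse it:

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}                    # strip the trailing ':' left by awk's ORS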
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:35:52.927 07:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:36:01.070 07:45:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:01.070 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:01.070 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:01.070 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
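The scan above walks the supported-device arrays (the E810 IDs 0x1592/0x159b populated earlier from pci_bus_cache) and maps each matching PCI function to its kernel net interface through sysfs. A standalone approximation (illustrative only; the helper consults its own pci_bus_cache rather than calling lspci):

    for pci in $(lspci -Dnm | awk '$3 == "\"8086\"" && ($4 == "\"1592\"" || $4 == "\"159b\"") {print $1}'); do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do   # same sysfs glob as the trace
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done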
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:01.070 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.070 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.071 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.071 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.071 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.071 07:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:01.071 07:45:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:01.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:36:01.071 00:36:01.071 --- 10.0.0.2 ping statistics --- 00:36:01.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.071 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:01.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:36:01.071 00:36:01.071 --- 10.0.0.1 ping statistics --- 00:36:01.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.071 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1717670 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1717670 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
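Condensed from the nvmf_tcp_init trace above: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, a firewall exception is added for the NVMe/TCP port, and connectivity is ping-verified both ways. Every command below is taken from the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                       # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> root ns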
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1717670 ']' 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:01.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:01.071 07:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:01.071 [2024-11-26 07:45:28.269780] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:01.071 [2024-11-26 07:45:28.270755] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:36:01.071 [2024-11-26 07:45:28.270793] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:01.071 [2024-11-26 07:45:28.365659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.071 [2024-11-26 07:45:28.400361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:01.071 [2024-11-26 07:45:28.400393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:01.071 [2024-11-26 07:45:28.400401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:01.071 [2024-11-26 07:45:28.400407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:01.071 [2024-11-26 07:45:28.400413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:01.071 [2024-11-26 07:45:28.400949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:01.071 [2024-11-26 07:45:28.455460] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:01.071 [2024-11-26 07:45:28.455714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
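nvmfappstart, traced above, boils down to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A sketch (the polling loop approximates waitforlisten, which also bounds the wait with the max_retries=100 seen in the trace; rpc_get_methods is a standard SPDK RPC):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid"            # under set -e this aborts if the target died
        sleep 0.1
    done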
00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:01.071 [2024-11-26 07:45:29.121739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:01.071 [2024-11-26 07:45:29.149995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.071 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:01.331 07:45:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:01.331 malloc0 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:01.331 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:01.331 { 00:36:01.331 "params": { 00:36:01.331 "name": "Nvme$subsystem", 00:36:01.331 "trtype": "$TEST_TRANSPORT", 00:36:01.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:01.332 "adrfam": "ipv4", 00:36:01.332 "trsvcid": "$NVMF_PORT", 00:36:01.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:01.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:01.332 "hdgst": ${hdgst:-false}, 00:36:01.332 "ddgst": ${ddgst:-false} 00:36:01.332 }, 00:36:01.332 "method": "bdev_nvme_attach_controller" 00:36:01.332 } 00:36:01.332 EOF 00:36:01.332 )") 00:36:01.332 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:01.332 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:36:01.332 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:01.332 07:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:01.332 "params": { 00:36:01.332 "name": "Nvme1", 00:36:01.332 "trtype": "tcp", 00:36:01.332 "traddr": "10.0.0.2", 00:36:01.332 "adrfam": "ipv4", 00:36:01.332 "trsvcid": "4420", 00:36:01.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:01.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:01.332 "hdgst": false, 00:36:01.332 "ddgst": false 00:36:01.332 }, 00:36:01.332 "method": "bdev_nvme_attach_controller" 00:36:01.332 }' 00:36:01.332 [2024-11-26 07:45:29.254155] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
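The subsystem configuration just traced, replayed as explicit rpc.py calls (an equivalent sketch; the test issues the same commands through its rpc_cmd wrapper, and all flags are copied from the trace):

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy         # transport options exactly as traced, zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                        # any host, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MiB RAM bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches to this subsystem from the root namespace using the bdev_nvme_attach_controller parameters printed above, fed in through --json /dev/fd/62.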
00:36:01.332 [2024-11-26 07:45:29.254237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1717877 ] 00:36:01.332 [2024-11-26 07:45:29.347613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.332 [2024-11-26 07:45:29.400105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.592 Running I/O for 10 seconds... 00:36:03.916 6636.00 IOPS, 51.84 MiB/s [2024-11-26T06:45:32.954Z] 6615.00 IOPS, 51.68 MiB/s [2024-11-26T06:45:33.896Z] 6625.67 IOPS, 51.76 MiB/s [2024-11-26T06:45:34.837Z] 6644.50 IOPS, 51.91 MiB/s [2024-11-26T06:45:35.778Z] 6648.20 IOPS, 51.94 MiB/s [2024-11-26T06:45:36.815Z] 7061.00 IOPS, 55.16 MiB/s [2024-11-26T06:45:37.844Z] 7433.43 IOPS, 58.07 MiB/s [2024-11-26T06:45:38.786Z] 7719.62 IOPS, 60.31 MiB/s [2024-11-26T06:45:39.732Z] 7941.89 IOPS, 62.05 MiB/s [2024-11-26T06:45:39.732Z] 8119.90 IOPS, 63.44 MiB/s 00:36:11.634 Latency(us) 00:36:11.634 [2024-11-26T06:45:39.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.634 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:36:11.634 Verification LBA range: start 0x0 length 0x1000 00:36:11.634 Nvme1n1 : 10.01 8123.21 63.46 0.00 0.00 15714.02 1351.68 27962.03 00:36:11.634 [2024-11-26T06:45:39.732Z] =================================================================================================================== 00:36:11.634 [2024-11-26T06:45:39.732Z] Total : 8123.21 63.46 0.00 0.00 15714.02 1351.68 27962.03 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1719885 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:11.895 { 00:36:11.895 "params": { 00:36:11.895 "name": "Nvme$subsystem", 00:36:11.895 "trtype": "$TEST_TRANSPORT", 00:36:11.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.895 "adrfam": "ipv4", 00:36:11.895 "trsvcid": "$NVMF_PORT", 00:36:11.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.895 "hdgst": ${hdgst:-false}, 00:36:11.895 "ddgst": ${ddgst:-false} 00:36:11.895 }, 00:36:11.895 "method": "bdev_nvme_attach_controller" 00:36:11.895 } 00:36:11.895 EOF 00:36:11.895 )") 00:36:11.895 [2024-11-26 07:45:39.745278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:36:11.895 [2024-11-26 07:45:39.745306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:11.895 07:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:11.895 "params": { 00:36:11.895 "name": "Nvme1", 00:36:11.895 "trtype": "tcp", 00:36:11.895 "traddr": "10.0.0.2", 00:36:11.895 "adrfam": "ipv4", 00:36:11.895 "trsvcid": "4420", 00:36:11.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:11.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:11.895 "hdgst": false, 00:36:11.895 "ddgst": false 00:36:11.895 }, 00:36:11.895 "method": "bdev_nvme_attach_controller" 00:36:11.895 }' 00:36:11.895 [2024-11-26 07:45:39.757246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:11.895 [2024-11-26 07:45:39.757256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:11.895 [2024-11-26 07:45:39.769244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:11.895 [2024-11-26 07:45:39.769251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:11.895 [2024-11-26 07:45:39.781243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:11.895 [2024-11-26 07:45:39.781250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:11.895 [2024-11-26 07:45:39.789622] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
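The wall of paired subsystem.c/nvmf_rpc.c errors that follows is expected behavior, not a failure: while the second bdevperf instance (-t 5 -q 128 -w randrw -M 50 -o 8192) runs, zcopy.sh keeps re-adding the namespace that already exists as NSID 1, and every rejected add still pauses and resumes the subsystem under live I/O, which is the path being exercised. In sketch form (the loop condition is illustrative, as inferred from the trace; rpc_cmd is the test's own wrapper):

    while kill -0 "$perfpid" 2>/dev/null; do
        # Fails with "Requested NSID 1 already in use" but pauses/resumes cnode1 each time.
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done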
00:36:11.895 [2024-11-26 07:45:39.789669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719885 ]
00:36:11.895 [2024-11-26 07:45:39.793243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:11.895 [2024-11-26 07:45:39.793250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:11.895 [2024-11-26 07:45:39.871337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:11.895 [2024-11-26 07:45:39.900743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:12.157 Running I/O for 5 seconds... [2024-11-26 07:45:40.206565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:12.157 [2024-11-26 07:45:40.206581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
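The EAL banner and the NOTICE lines above are SPDK's bdevperf example application starting up for this phase of the test. As a rough sketch only (the JSON config path, queue depth, and workload below are assumptions for illustration, not values taken from this job), an invocation consistent with the single-core mask, the 5-second run, and the ~8 KiB I/O size implied by the per-second stats would look like:

# Sketch only: --json path, -q, and -w are assumed. -m 0x1 matches the
# "-c 0x1" core mask in the EAL banner, -t 5 matches "Running I/O for
# 5 seconds...", and -o 8192 matches ~148.5 MiB/s at ~19k IOPS.
./build/examples/bdevperf --json /tmp/bdevperf.json -m 0x1 -q 32 -o 8192 -w randread -t 5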
00:36:12.418 [2024-11-26 07:45:40.300636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:12.418 [2024-11-26 07:45:40.300650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:13.204 19015.00 IOPS, 148.55 MiB/s [2024-11-26T06:45:41.302Z] [2024-11-26 07:45:41.209187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:13.204 [2024-11-26 07:45:41.209202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
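The pair of errors that repeats above, with a fresh attempt roughly every 12 ms, is consistent with an RPC-driven namespace hot-plug loop: a caller keeps issuing the add-namespace RPC with a fixed NSID while that NSID is still attached, so spdk_nvmf_subsystem_add_ns_ext rejects it and the RPC handler, which pauses the subsystem before attaching (the nvmf_rpc_ns_paused callback), reports the failure. A loop of this shape reproduces the pattern against a running target; the NQN and bdev name here are placeholders, while the rpc.py method names are the standard ones:

# Placeholder NQN and bdev name; while NSID 1 is still attached,
# each add attempt fails with "Requested NSID 1 already in use".
NQN=nqn.2016-06.io.spdk:cnode1
while true; do
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 "$NQN" Malloc0 || true
    scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 1 || true
done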
00:36:14.248 19049.00 IOPS, 148.82 MiB/s [2024-11-26T06:45:42.346Z] [2024-11-26 07:45:42.209904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:14.248 [2024-11-26 07:45:42.209921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
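The "19049.00 IOPS, 148.82 MiB/s" entries are bdevperf's once-per-second progress lines for the running job; dividing throughput by IOPS recovers the I/O size the job was configured with:

# 148.82 MiB/s at 19049.00 IOPS works out to ~8192 bytes per I/O:
awk 'BEGIN { printf "%.0f bytes per I/O\n", 148.82 * 1024 * 1024 / 19049 }'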
00:36:14.248 [2024-11-26 07:45:42.264546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:14.248 [2024-11-26 07:45:42.264561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:14.771 [2024-11-26 07:45:42.848615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:14.771 [2024-11-26 07:45:42.848630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
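Outside of a deliberate stress loop like this one, the usual first step when an add reports "already in use" is to ask the target which NSIDs the subsystem already exposes; an add can only succeed for an NSID that is absent from that list. A hedged example (nvmf_get_subsystems is a standard rpc.py method; the jq filter is just one convenient way to slice its JSON output):

# List each subsystem's NQN and the NSIDs it currently exposes.
scripts/rpc.py nvmf_get_subsystems | jq '.[] | {nqn: .nqn, nsids: [.namespaces[].nsid]}'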
00:36:15.293 19043.00 IOPS, 148.77 MiB/s [2024-11-26T06:45:43.391Z] [2024-11-26 07:45:43.214408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:15.293 [2024-11-26 07:45:43.214422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:15.293 [2024-11-26 07:45:43.241370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:15.293 [2024-11-26 07:45:43.241385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:15.293 [2024-11-26 07:45:43.254370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:15.293 [2024-11-26 07:45:43.254384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:15.815 [2024-11-26 07:45:43.837742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:15.815 [2024-11-26 07:45:43.837756]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:15.815 [2024-11-26 07:45:43.852585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:15.815 [2024-11-26 07:45:43.852599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:15.815 [2024-11-26 07:45:43.865761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:15.815 [2024-11-26 07:45:43.865775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:15.815 [2024-11-26 07:45:43.880578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:15.815 [2024-11-26 07:45:43.880592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:15.815 [2024-11-26 07:45:43.893851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:15.815 [2024-11-26 07:45:43.893865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:43.908813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:43.908828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:43.921691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:43.921704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:43.936331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:43.936345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:43.949325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:43.949340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:43.962121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:43.962135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:43.976400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:43.976414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:43.989473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:43.989488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.002303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.002322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.016563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.016577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.029568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.029582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.044081] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.044096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.056852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.056867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.070400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.070416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.084538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.084553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.097713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.097726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.112482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.112496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.125579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.125594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.140163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.140178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.153451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.153466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.076 [2024-11-26 07:45:44.166179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.076 [2024-11-26 07:45:44.166193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.180579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.180593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.193485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.193500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.206887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.206901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 19037.25 IOPS, 148.73 MiB/s [2024-11-26T06:45:44.435Z] [2024-11-26 07:45:44.220587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.220602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.233662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:36:16.337 [2024-11-26 07:45:44.233676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.248491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.248506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.261363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.261378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.274260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.274274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.288133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.288148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.301033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.301048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.314295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.314309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.327886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.327900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.341109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.341124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.353991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.354005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.369125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.369139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.382206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.382221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.396348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.396362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.409140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.409155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.337 [2024-11-26 07:45:44.422615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.337 [2024-11-26 07:45:44.422629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.436687] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.436702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.449675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.449690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.464318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.464333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.477105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.477120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.490300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.490315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.504199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.504214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.517451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.517466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.530455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.530470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.544195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.544210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.557189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.557205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.570662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.570677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.584649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.584664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.597760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.597774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.612304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.598 [2024-11-26 07:45:44.612318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.598 [2024-11-26 07:45:44.625410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.599 [2024-11-26 07:45:44.625424] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.599 [2024-11-26 07:45:44.638179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.599 [2024-11-26 07:45:44.638193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.599 [2024-11-26 07:45:44.652503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.599 [2024-11-26 07:45:44.652518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.599 [2024-11-26 07:45:44.665710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.599 [2024-11-26 07:45:44.665724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.599 [2024-11-26 07:45:44.680381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.599 [2024-11-26 07:45:44.680396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.693418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.693434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.705688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.705702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.720103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.720118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.733143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.733163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.745975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.745989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.760704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.760719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.774063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.774078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.788377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.788391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.801447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.801462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.814231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.814245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.828589] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.828604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.841615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.841628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.856406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.856420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.869549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.869562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.884578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.884593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.897602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.897616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.912322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.912336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.925648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.925662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:16.860 [2024-11-26 07:45:44.940406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:16.860 [2024-11-26 07:45:44.940421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:44.953322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:44.953336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:44.966069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:44.966082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:44.980509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:44.980523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:44.993321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:44.993335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.006091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.006105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.020128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.020142] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.033029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.033045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.045633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.045648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.060810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.060825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.073655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.073669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.088411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.088426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.101350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.101365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.113995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.114009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.128621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.128635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.141542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.141555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.156202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.156216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.169270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.169284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.182004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.182018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.196873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.196887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.122 [2024-11-26 07:45:45.209669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.122 [2024-11-26 07:45:45.209683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.383 19038.60 IOPS, 148.74 MiB/s 00:36:17.383 Latency(us) 00:36:17.383 
[2024-11-26T06:45:45.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:17.383 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:36:17.383 Nvme1n1 : 5.01 19041.95 148.77 0.00 0.00 6716.16 2676.05 11304.96 00:36:17.383 [2024-11-26T06:45:45.481Z] =================================================================================================================== 00:36:17.383 [2024-11-26T06:45:45.481Z] Total : 19041.95 148.77 0.00 0.00 6716.16 2676.05 11304.96 00:36:17.384 [2024-11-26 07:45:45.221249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.384 [2024-11-26 07:45:45.221263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.384 [2024-11-26 07:45:45.233249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.384 [2024-11-26 07:45:45.233266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.384 [2024-11-26 07:45:45.245253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.384 [2024-11-26 07:45:45.245265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.384 [2024-11-26 07:45:45.257251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.384 [2024-11-26 07:45:45.257264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.384 [2024-11-26 07:45:45.269248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.384 [2024-11-26 07:45:45.269259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.384 [2024-11-26 07:45:45.281245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.384 [2024-11-26 07:45:45.281255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.384 [2024-11-26 07:45:45.293243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.384 [2024-11-26 07:45:45.293252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.384 [2024-11-26 07:45:45.305247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.384 [2024-11-26 07:45:45.305256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.384 [2024-11-26 07:45:45.317244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:36:17.384 [2024-11-26 07:45:45.317252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:17.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1719885) - No such process 00:36:17.384 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1719885 00:36:17.384 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.384 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.384 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:17.384 07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.384 07:45:45 
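The retry loop above is zcopy.sh deliberately re-adding NSID 1 while the namespace is still attached, so every nvmf_subsystem_add_ns RPC fails inside spdk_nvmf_subsystem_add_ns_ext() and gets retried. A minimal sketch of triggering the same conflict by hand, assuming a running SPDK target and scripts/rpc.py on PATH (the bdev and subsystem names are illustrative):

    # Expose a malloc bdev as NSID 1, then ask for the same NSID again.
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # The second add with an explicit, already-claimed NSID is rejected,
    # producing the "Requested NSID 1 already in use" pair logged above.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1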
07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
delay0
07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
07:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-11-26 07:45:45.524374] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:36:24.237 [2024-11-26 07:45:51.944144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83ff60 is same with the state(6) to be set
00:36:24.237 [2024-11-26 07:45:51.944183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83ff60 is same with the state(6) to be set
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
Initialization complete. Launching workers.
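zcopy.sh@53-56 above swap the namespace over to a delay bdev and then run the abort example against it: with every latency knob set to 1,000,000 us, I/O stays queued long enough to be aborted. A sketch of that pairing, assuming -r/-t/-w/-n are the average and p99 read/write latencies in microseconds (an assumption about the option semantics, taken from how the values are used here):

    # Wrap malloc0 in an artificial ~1 s latency on every I/O path.
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # abort: one core (-c 0x1), 5 s run (-t), queue depth 64 (-q),
    # 50/50 random read/write (-w randrw -M 50), log warnings only (-l),
    # target addressed by a transport ID string (-r).
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'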
00:36:24.237 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2288
00:36:24.237 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2566, failed to submit 42
00:36:24.237 success 2416, unsuccessful 150, failed 0
07:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
07:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
07:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
07:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
07:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
07:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
07:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
07:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1717670 ']'
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1717670
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1717670 ']'
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1717670
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1717670
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1717670'
killing process with pid 1717670
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1717670
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1717670
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
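The abort counters above are internally consistent; checking the sums with the figures from the three summary lines:

    $ echo $(( 320 + 2288 ))     # I/O completed + I/O failed
    2608
    $ echo $(( 2566 + 42 ))      # aborts submitted + failed to submit
    2608
    $ echo $(( 2416 + 150 + 0 )) # success + unsuccessful + failed
    2566

Every queued I/O was matched by an abort attempt (2608 each way), and the 2566 aborts that reached the controller account exactly for the success/unsuccessful split.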
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
07:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:26.784
00:36:26.784 real	0m33.558s
00:36:26.784 user	0m43.021s
00:36:26.784 sys	0m11.999s
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:26.784 ************************************
00:36:26.784 END TEST nvmf_zcopy
00:36:26.784 ************************************
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:26.784 ************************************
00:36:26.784 START TEST nvmf_nmic
00:36:26.784 ************************************
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:36:26.784 * Looking for test storage...
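The nvmfcleanup trace above (set +e, for i in {1..20}, modprobe -v -r nvme-tcp) exists because the kernel can still hold a reference on the transport modules right after the target exits. A sketch of that retry pattern; the break-and-sleep handling is an illustration of the idea, not the verbatim nvmf/common.sh body:

    set +e
    for i in {1..20}; do
        # Retry until the last nvme-tcp reference is dropped.
        modprobe -v -r nvme-tcp && break
        sleep 1   # assumption: brief back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e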
00:36:26.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:36:26.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:26.784 --rc genhtml_branch_coverage=1
00:36:26.784 --rc genhtml_function_coverage=1
00:36:26.784 --rc genhtml_legend=1
00:36:26.784 --rc geninfo_all_blocks=1
00:36:26.784 --rc geninfo_unexecuted_blocks=1
00:36:26.784
00:36:26.784 '
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:36:26.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:26.784 --rc genhtml_branch_coverage=1
00:36:26.784 --rc genhtml_function_coverage=1
00:36:26.784 --rc genhtml_legend=1
00:36:26.784 --rc geninfo_all_blocks=1
00:36:26.784 --rc geninfo_unexecuted_blocks=1
00:36:26.784
00:36:26.784 '
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:36:26.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:26.784 --rc genhtml_branch_coverage=1
00:36:26.784 --rc genhtml_function_coverage=1
00:36:26.784 --rc genhtml_legend=1
00:36:26.784 --rc geninfo_all_blocks=1
00:36:26.784 --rc geninfo_unexecuted_blocks=1
00:36:26.784
00:36:26.784 '
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:36:26.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:26.784 --rc genhtml_branch_coverage=1
00:36:26.784 --rc genhtml_function_coverage=1
00:36:26.784 --rc genhtml_legend=1
00:36:26.784 --rc geninfo_all_blocks=1
00:36:26.784 --rc geninfo_unexecuted_blocks=1
00:36:26.784
00:36:26.784 '
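The lt/cmp_versions trace above splits both version strings on IFS=.-: and compares them component by component, which is how lcov 1.15 sorts below 2 and the --rc lcov_* options get selected. A self-contained sketch of the same logic, not the verbatim scripts/common.sh implementation:

    # lt A B -> exit 0 when version A sorts strictly below version B.
    lt() {
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing components count as 0, so "1.15" vs "2" compares 1 vs 2.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2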
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
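nvmf/common.sh@17-18 above take the host identity straight from nvme-cli: gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the UUID suffix doubles as the host ID. A sketch of that derivation, assuming nvme-cli is installed; the parameter expansion is one illustrative way to strip the prefix, not necessarily the exact common.sh expression:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # drop everything up to the last colon
    echo "$NVME_HOSTNQN" "$NVME_HOSTID"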
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
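paths/export.sh above prepends the same three toolchain directories on every sourcing, which is why the PATH it echoes carries seven copies of each; lookup still resolves to the first hit, so the duplication is harmless. If the duplicates ever needed collapsing, an awk pass like the following would do it (an illustration, not something export.sh itself does):

    # Deduplicate PATH entries, preserving first-occurrence order.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH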
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
07:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:34.929 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:34.929 07:46:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:34.929 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:34.929 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:34.930 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.930 
07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:34.930 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
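At this point nvmf_tcp_init has built the back-to-back topology the rest of the test runs on: the first E810 port (cvl_0_0) moves into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. Condensed from this stretch of the trace, including the link-up commands that follow (the cvl_0_* interface names are specific to this rig):

  # Target NIC lives in its own namespace; initiator NIC stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

Because the two ports are cabled back to back (NET_TYPE=phy), traffic between 10.0.0.1 and 10.0.0.2 actually crosses the wire rather than a loopback, which is what the two ping checks just below verify.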
00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:34.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:34.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:36:34.930 00:36:34.930 --- 10.0.0.2 ping statistics --- 00:36:34.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.930 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:34.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:34.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:36:34.930 00:36:34.930 --- 10.0.0.1 ping statistics --- 00:36:34.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.930 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1726230 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 1726230 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1726230 ']' 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.930 07:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.930 [2024-11-26 07:46:01.937102] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:34.930 [2024-11-26 07:46:01.938247] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:36:34.930 [2024-11-26 07:46:01.938298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:34.930 [2024-11-26 07:46:02.039237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:34.930 [2024-11-26 07:46:02.094188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.930 [2024-11-26 07:46:02.094242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:34.930 [2024-11-26 07:46:02.094251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.930 [2024-11-26 07:46:02.094258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:34.930 [2024-11-26 07:46:02.094266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:34.930 [2024-11-26 07:46:02.096241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.930 [2024-11-26 07:46:02.096345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:34.930 [2024-11-26 07:46:02.096515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:34.930 [2024-11-26 07:46:02.096516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.930 [2024-11-26 07:46:02.172537] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:34.930 [2024-11-26 07:46:02.173805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:34.930 [2024-11-26 07:46:02.173870] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
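nvmfappstart launches the target inside the namespace with --interrupt-mode, so the four reactors sleep on event file descriptors instead of busy-polling; the NOTICE lines confirm each spdk_thread and poll group was switched over. The launch plus a wait-for-RPC step, sketched under the assumption that polling rpc_get_methods is an acceptable stand-in for SPDK's waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Poll until the RPC socket answers; waitforlisten does this more carefully.
  for _ in $(seq 1 30); do
      ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1 && break
      sleep 1
  done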
00:36:34.930 [2024-11-26 07:46:02.174341] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:34.930 [2024-11-26 07:46:02.174431] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:34.930 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:34.930 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:36:34.930 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:34.930 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:34.930 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.931 [2024-11-26 07:46:02.781625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.931 Malloc0 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
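With the target up, the rpc_cmd calls above provision the whole export path: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE values set earlier), and subsystem cnode1 serving that bdev on 10.0.0.2:4420. The same sequence as plain rpc.py invocations, with the options copied verbatim from the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420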
00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.931 [2024-11-26 07:46:02.865888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:36:34.931 test case1: single bdev can't be used in multiple subsystems 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.931 [2024-11-26 07:46:02.901235] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:36:34.931 [2024-11-26 07:46:02.901255] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:36:34.931 [2024-11-26 07:46:02.901263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:36:34.931 request: 00:36:34.931 { 00:36:34.931 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:36:34.931 "namespace": { 00:36:34.931 "bdev_name": "Malloc0", 00:36:34.931 "no_auto_visible": false 00:36:34.931 }, 00:36:34.931 "method": "nvmf_subsystem_add_ns", 00:36:34.931 "req_id": 1 00:36:34.931 } 00:36:34.931 Got JSON-RPC error response 00:36:34.931 response: 00:36:34.931 { 00:36:34.931 "code": -32602, 00:36:34.931 "message": "Invalid parameters" 00:36:34.931 } 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:36:34.931 07:46:02 
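That JSON-RPC error is the pass condition for test case 1: adding Malloc0 to cnode1 earlier took an exclusive_write claim on the bdev (the bdev.c message above says exactly that), so a second subsystem must be refused it, and the harness records nmic_status=1 for the expected -32602 response. Reproduced as rpc.py calls:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # Must fail: Malloc0 is already claimed exclusive_write by cnode1.
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      && echo 'BUG: add_ns to a second subsystem should have been rejected'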
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:36:34.931 Adding namespace failed - expected result. 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:36:34.931 test case2: host connect to nvmf target in multiple paths 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:34.931 [2024-11-26 07:46:02.913349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.931 07:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:35.192 07:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:36:35.764 07:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:36:35.764 07:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:36:35.764 07:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:35.764 07:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:35.764 07:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:36:37.678 07:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:37.678 07:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:37.678 07:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:37.678 07:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:37.678 07:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:37.678 07:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:36:37.678 07:46:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:37.678 [global] 00:36:37.678 thread=1 00:36:37.678 invalidate=1 
00:36:37.678 rw=write 00:36:37.678 time_based=1 00:36:37.678 runtime=1 00:36:37.678 ioengine=libaio 00:36:37.678 direct=1 00:36:37.678 bs=4096 00:36:37.678 iodepth=1 00:36:37.678 norandommap=0 00:36:37.678 numjobs=1 00:36:37.678 00:36:37.678 verify_dump=1 00:36:37.678 verify_backlog=512 00:36:37.678 verify_state_save=0 00:36:37.678 do_verify=1 00:36:37.678 verify=crc32c-intel 00:36:37.678 [job0] 00:36:37.678 filename=/dev/nvme0n1 00:36:37.959 Could not set queue depth (nvme0n1) 00:36:38.223 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:38.223 fio-3.35 00:36:38.223 Starting 1 thread 00:36:39.605 00:36:39.605 job0: (groupid=0, jobs=1): err= 0: pid=1727236: Tue Nov 26 07:46:07 2024 00:36:39.605 read: IOPS=16, BW=66.7KiB/s (68.3kB/s)(68.0KiB/1020msec) 00:36:39.605 slat (nsec): min=25996, max=26710, avg=26237.88, stdev=187.86 00:36:39.605 clat (usec): min=1129, max=42023, avg=39526.29, stdev=9895.76 00:36:39.605 lat (usec): min=1155, max=42049, avg=39552.53, stdev=9895.64 00:36:39.605 clat percentiles (usec): 00:36:39.605 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 20.00th=[41681], 00:36:39.605 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:36:39.605 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:39.605 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:39.605 | 99.99th=[42206] 00:36:39.605 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:36:39.605 slat (usec): min=10, max=29248, avg=87.42, stdev=1291.33 00:36:39.605 clat (usec): min=236, max=788, avg=583.55, stdev=90.85 00:36:39.605 lat (usec): min=249, max=29971, avg=670.96, stdev=1300.93 00:36:39.605 clat percentiles (usec): 00:36:39.605 | 1.00th=[ 338], 5.00th=[ 412], 10.00th=[ 445], 20.00th=[ 523], 00:36:39.605 | 30.00th=[ 562], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 611], 00:36:39.605 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 693], 95.00th=[ 709], 00:36:39.605 | 99.00th=[ 742], 99.50th=[ 783], 99.90th=[ 791], 99.95th=[ 791], 00:36:39.605 | 99.99th=[ 791] 00:36:39.605 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:36:39.605 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:39.605 lat (usec) : 250=0.19%, 500=17.58%, 750=78.26%, 1000=0.76% 00:36:39.605 lat (msec) : 2=0.19%, 50=3.02% 00:36:39.605 cpu : usr=0.79%, sys=1.47%, ctx=531, majf=0, minf=1 00:36:39.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:39.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:39.605 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:39.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:39.605 00:36:39.605 Run status group 0 (all jobs): 00:36:39.605 READ: bw=66.7KiB/s (68.3kB/s), 66.7KiB/s-66.7KiB/s (68.3kB/s-68.3kB/s), io=68.0KiB (69.6kB), run=1020-1020msec 00:36:39.605 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:36:39.605 00:36:39.605 Disk stats (read/write): 00:36:39.605 nvme0n1: ios=39/512, merge=0/0, ticks=1508/299, in_queue=1807, util=98.80% 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:39.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 
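Test case 2 then connects the host to cnode1 through both listeners (ports 4420 and 4421) and drives the resulting device with a one-second libaio write-plus-verify job via the fio-wrapper; the "disconnected 2 controller(s)" line just above confirms both paths were indeed established. The [global] section printed above corresponds to a standalone job file along these lines (/dev/nvme0n1 depends on enumeration order):

  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1

job0 finishing with err= 0 means every written block read back with a matching CRC32C, which is what lets the script move on to the disconnect.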
00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:39.605 rmmod nvme_tcp 00:36:39.605 rmmod nvme_fabrics 00:36:39.605 rmmod nvme_keyring 00:36:39.605 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1726230 ']' 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1726230 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1726230 ']' 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1726230 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1726230 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
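nvmftestfini now unwinds everything the init path set up: sync, unload the nvme-tcp/nvme-fabrics/nvme-keyring modules (the rmmod lines above), kill the target by its recorded pid, and, in the lines that follow, strip the iptables rule by its SPDK_NVMF comment tag and remove the namespace. Condensed into the equivalent manual cleanup (a sketch, not the exact helper bodies):

  sync
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
  kill "$nvmfpid"
  # Drop only the rules the test added, selected by their comment tag.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1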
common/autotest_common.sh@972 -- # echo 'killing process with pid 1726230' 00:36:39.606 killing process with pid 1726230 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1726230 00:36:39.606 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1726230 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.867 07:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.786 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:41.786 00:36:41.786 real 0m15.469s 00:36:41.786 user 0m38.211s 00:36:41.786 sys 0m7.252s 00:36:41.786 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:41.786 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:36:41.786 ************************************ 00:36:41.786 END TEST nvmf_nmic 00:36:41.786 ************************************ 00:36:41.786 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:41.786 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:41.786 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.786 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:42.047 ************************************ 00:36:42.047 START TEST nvmf_fio_target 00:36:42.047 ************************************ 00:36:42.047 07:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:36:42.047 * Looking for test storage... 
00:36:42.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:36:42.047 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:42.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.048 --rc genhtml_branch_coverage=1 00:36:42.048 --rc genhtml_function_coverage=1 00:36:42.048 --rc genhtml_legend=1 00:36:42.048 --rc geninfo_all_blocks=1 00:36:42.048 --rc geninfo_unexecuted_blocks=1 00:36:42.048 00:36:42.048 ' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:42.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.048 --rc genhtml_branch_coverage=1 00:36:42.048 --rc genhtml_function_coverage=1 00:36:42.048 --rc genhtml_legend=1 00:36:42.048 --rc geninfo_all_blocks=1 00:36:42.048 --rc geninfo_unexecuted_blocks=1 00:36:42.048 00:36:42.048 ' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:42.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.048 --rc genhtml_branch_coverage=1 00:36:42.048 --rc genhtml_function_coverage=1 00:36:42.048 --rc genhtml_legend=1 00:36:42.048 --rc geninfo_all_blocks=1 00:36:42.048 --rc geninfo_unexecuted_blocks=1 00:36:42.048 00:36:42.048 ' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:42.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.048 --rc genhtml_branch_coverage=1 00:36:42.048 --rc genhtml_function_coverage=1 00:36:42.048 --rc genhtml_legend=1 00:36:42.048 --rc geninfo_all_blocks=1 00:36:42.048 --rc geninfo_unexecuted_blocks=1 00:36:42.048 
00:36:42.048 ' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:42.048 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:42.049 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:42.049 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.049 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:42.049 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.049 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:42.049 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:42.049 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:36:42.049 07:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:50.194 07:46:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:50.194 07:46:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:50.194 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:50.194 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:50.194 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:50.194 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.194 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:50.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:50.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:36:50.195 00:36:50.195 --- 10.0.0.2 ping statistics --- 00:36:50.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.195 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:50.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:50.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:36:50.195 00:36:50.195 --- 10.0.0.1 ping statistics --- 00:36:50.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.195 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1731762 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1731762 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1731762 ']' 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
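The network bring-up traced above reduces to a short, reproducible sequence. The sketch below condenses the commands visible in the trace, assuming the same E810 port names (cvl_0_0, cvl_0_1) and 10.0.0.0/24 addressing as this run; the preceding `ip -4 addr flush` of both ports and the `-m comment` tag on the iptables rule are dropped for brevity:

# Move the target-side port into its own namespace so initiator->target
# traffic must cross the physical link instead of the local stack.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends: the initiator keeps cvl_0_1, the namespace owns cvl_0_0.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring up both links plus the namespace loopback.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Accept NVMe/TCP traffic on port 4420 at the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The namespace is what makes this a meaningful phy test: without it, the kernel would route 10.0.0.1 to 10.0.0.2 over loopback and the NIC under test would never see the traffic.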
00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:50.195 07:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:50.195 [2024-11-26 07:46:17.549399] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:50.195 [2024-11-26 07:46:17.550547] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:36:50.195 [2024-11-26 07:46:17.550599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:50.195 [2024-11-26 07:46:17.652068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:50.195 [2024-11-26 07:46:17.705657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:50.195 [2024-11-26 07:46:17.705709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:50.195 [2024-11-26 07:46:17.705717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:50.195 [2024-11-26 07:46:17.705724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:50.195 [2024-11-26 07:46:17.705733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:50.195 [2024-11-26 07:46:17.707783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.195 [2024-11-26 07:46:17.707942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:50.195 [2024-11-26 07:46:17.708009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.195 [2024-11-26 07:46:17.708010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:50.195 [2024-11-26 07:46:17.785034] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:50.195 [2024-11-26 07:46:17.786153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:50.195 [2024-11-26 07:46:17.786238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:50.195 [2024-11-26 07:46:17.786838] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:50.196 [2024-11-26 07:46:17.786860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
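With both directions pinging, the target is launched inside that namespace in interrupt mode (the full command line appears in the trace above), and the setup that follows in fio.sh is ordinary rpc.py provisioning. Condensed into a sketch, with the workspace paths shortened and the `--hostnqn`/`--hostid` flags on the nvme connect from the trace omitted:

rpc_py=scripts/rpc.py   # trace uses the absolute .../spdk/scripts/rpc.py path

# Interrupt-mode target on four reactors (-m 0xF), inside the namespace.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

# TCP transport with the option set used by this test (-o, -u 8192).
$rpc_py nvmf_create_transport -t tcp -o -u 8192

# Seven 64 MiB / 512 B-block malloc bdevs: Malloc0/1 are exported directly,
# Malloc2/3 become a raid0, Malloc4/5/6 a concat raid.
for _ in $(seq 7); do $rpc_py bdev_malloc_create 64 512; done
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem, four namespaces, one TCP listener on the target-side IP.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for b in Malloc0 Malloc1 raid0 concat0; do
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
done
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then poll lsblk until the SPDKISFASTANDAWESOME
# serial shows up four times (waitforserial in the trace).
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which is exactly what the fio job files in the runs below target.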
00:36:50.457 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:50.457 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:36:50.457 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:50.457 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:50.457 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:50.457 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:50.457 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:50.718 [2024-11-26 07:46:18.577104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.718 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:50.979 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:36:50.979 07:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:50.979 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:36:50.979 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:51.239 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:36:51.239 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:51.501 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:36:51.501 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:36:51.763 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:51.763 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:36:51.763 07:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:52.025 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:36:52.025 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:52.286 07:46:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:36:52.286 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:36:52.547 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:52.547 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:52.547 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:52.806 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:36:52.806 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:53.066 07:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:53.066 [2024-11-26 07:46:21.125068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.327 07:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:36:53.327 07:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:36:53.586 07:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:54.159 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:36:54.159 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:36:54.159 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:54.159 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:36:54.159 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:36:54.159 07:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:36:56.073 07:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:56.073 07:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:36:56.073 07:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:56.073 07:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:36:56.073 07:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:56.073 07:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:36:56.073 07:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:36:56.073 [global] 00:36:56.073 thread=1 00:36:56.073 invalidate=1 00:36:56.073 rw=write 00:36:56.073 time_based=1 00:36:56.073 runtime=1 00:36:56.073 ioengine=libaio 00:36:56.073 direct=1 00:36:56.073 bs=4096 00:36:56.073 iodepth=1 00:36:56.073 norandommap=0 00:36:56.073 numjobs=1 00:36:56.073 00:36:56.073 verify_dump=1 00:36:56.073 verify_backlog=512 00:36:56.073 verify_state_save=0 00:36:56.073 do_verify=1 00:36:56.073 verify=crc32c-intel 00:36:56.073 [job0] 00:36:56.073 filename=/dev/nvme0n1 00:36:56.073 [job1] 00:36:56.073 filename=/dev/nvme0n2 00:36:56.073 [job2] 00:36:56.073 filename=/dev/nvme0n3 00:36:56.073 [job3] 00:36:56.073 filename=/dev/nvme0n4 00:36:56.073 Could not set queue depth (nvme0n1) 00:36:56.073 Could not set queue depth (nvme0n2) 00:36:56.073 Could not set queue depth (nvme0n3) 00:36:56.073 Could not set queue depth (nvme0n4) 00:36:56.643 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:56.643 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:56.643 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:56.643 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:56.643 fio-3.35 00:36:56.643 Starting 4 threads 00:36:58.026 00:36:58.026 job0: (groupid=0, jobs=1): err= 0: pid=1733156: Tue Nov 26 07:46:25 2024 00:36:58.026 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:58.026 slat (nsec): min=8168, max=46255, avg=26866.04, stdev=2676.60 00:36:58.026 clat (usec): min=766, max=41911, avg=1075.80, stdev=1809.80 00:36:58.026 lat (usec): min=794, max=41919, avg=1102.66, stdev=1808.97 00:36:58.026 clat percentiles (usec): 00:36:58.026 | 1.00th=[ 807], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 938], 00:36:58.026 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1004], 00:36:58.026 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:36:58.026 | 99.00th=[ 1172], 99.50th=[ 1221], 99.90th=[41681], 99.95th=[41681], 00:36:58.026 | 99.99th=[41681] 00:36:58.026 write: IOPS=689, BW=2757KiB/s (2823kB/s)(2760KiB/1001msec); 0 zone resets 00:36:58.026 slat (nsec): min=10063, max=67635, avg=32098.20, stdev=10279.36 00:36:58.026 clat (usec): min=227, max=1822, avg=582.21, stdev=134.33 00:36:58.026 lat (usec): min=238, max=1849, avg=614.31, stdev=137.17 00:36:58.026 clat percentiles (usec): 00:36:58.026 | 1.00th=[ 306], 5.00th=[ 371], 10.00th=[ 400], 20.00th=[ 474], 00:36:58.026 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 586], 60.00th=[ 619], 00:36:58.026 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 783], 00:36:58.026 | 
99.00th=[ 881], 99.50th=[ 938], 99.90th=[ 1827], 99.95th=[ 1827], 00:36:58.026 | 99.99th=[ 1827] 00:36:58.026 bw ( KiB/s): min= 4096, max= 4096, per=42.23%, avg=4096.00, stdev= 0.00, samples=1 00:36:58.026 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:58.026 lat (usec) : 250=0.08%, 500=15.31%, 750=37.60%, 1000=27.54% 00:36:58.026 lat (msec) : 2=19.38%, 50=0.08% 00:36:58.026 cpu : usr=1.30%, sys=4.30%, ctx=1203, majf=0, minf=1 00:36:58.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.026 issued rwts: total=512,690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:58.026 job1: (groupid=0, jobs=1): err= 0: pid=1733170: Tue Nov 26 07:46:25 2024 00:36:58.026 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:58.026 slat (nsec): min=7137, max=55650, avg=27158.20, stdev=3320.45 00:36:58.026 clat (usec): min=730, max=1366, avg=965.87, stdev=64.84 00:36:58.026 lat (usec): min=758, max=1393, avg=993.02, stdev=65.27 00:36:58.026 clat percentiles (usec): 00:36:58.026 | 1.00th=[ 775], 5.00th=[ 848], 10.00th=[ 898], 20.00th=[ 922], 00:36:58.026 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 979], 00:36:58.026 | 70.00th=[ 996], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1057], 00:36:58.026 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1369], 99.95th=[ 1369], 00:36:58.026 | 99.99th=[ 1369] 00:36:58.026 write: IOPS=763, BW=3053KiB/s (3126kB/s)(3056KiB/1001msec); 0 zone resets 00:36:58.026 slat (nsec): min=9604, max=70910, avg=30632.38, stdev=10736.75 00:36:58.026 clat (usec): min=139, max=969, avg=598.69, stdev=128.67 00:36:58.026 lat (usec): min=152, max=1003, avg=629.33, stdev=133.32 00:36:58.026 clat percentiles (usec): 00:36:58.026 | 1.00th=[ 273], 5.00th=[ 375], 10.00th=[ 433], 20.00th=[ 486], 00:36:58.026 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 644], 00:36:58.026 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 791], 00:36:58.026 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 971], 99.95th=[ 971], 00:36:58.026 | 99.99th=[ 971] 00:36:58.026 bw ( KiB/s): min= 4096, max= 4096, per=42.23%, avg=4096.00, stdev= 0.00, samples=1 00:36:58.026 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:58.026 lat (usec) : 250=0.39%, 500=13.01%, 750=39.58%, 1000=36.60% 00:36:58.026 lat (msec) : 2=10.42% 00:36:58.026 cpu : usr=2.70%, sys=4.80%, ctx=1278, majf=0, minf=1 00:36:58.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.026 issued rwts: total=512,764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:58.026 job2: (groupid=0, jobs=1): err= 0: pid=1733187: Tue Nov 26 07:46:25 2024 00:36:58.026 read: IOPS=307, BW=1231KiB/s (1260kB/s)(1232KiB/1001msec) 00:36:58.026 slat (nsec): min=7623, max=58046, avg=26205.02, stdev=7102.46 00:36:58.026 clat (usec): min=677, max=42020, avg=2092.00, stdev=6515.96 00:36:58.026 lat (usec): min=704, max=42046, avg=2118.21, stdev=6516.08 00:36:58.026 clat percentiles (usec): 00:36:58.026 | 1.00th=[ 758], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 947], 
00:36:58.026 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1045], 60.00th=[ 1074], 00:36:58.026 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1205], 00:36:58.026 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:58.026 | 99.99th=[42206] 00:36:58.026 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:36:58.026 slat (nsec): min=9657, max=55014, avg=30900.37, stdev=10271.20 00:36:58.026 clat (usec): min=277, max=1239, avg=633.71, stdev=139.72 00:36:58.026 lat (usec): min=288, max=1292, avg=664.61, stdev=143.35 00:36:58.026 clat percentiles (usec): 00:36:58.026 | 1.00th=[ 343], 5.00th=[ 404], 10.00th=[ 457], 20.00th=[ 515], 00:36:58.026 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 676], 00:36:58.026 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 807], 95.00th=[ 857], 00:36:58.026 | 99.00th=[ 979], 99.50th=[ 1037], 99.90th=[ 1237], 99.95th=[ 1237], 00:36:58.026 | 99.99th=[ 1237] 00:36:58.026 bw ( KiB/s): min= 4096, max= 4096, per=42.23%, avg=4096.00, stdev= 0.00, samples=1 00:36:58.026 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:58.026 lat (usec) : 500=10.61%, 750=40.12%, 1000=22.68% 00:36:58.026 lat (msec) : 2=25.61%, 50=0.98% 00:36:58.026 cpu : usr=1.00%, sys=2.90%, ctx=821, majf=0, minf=1 00:36:58.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.026 issued rwts: total=308,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:58.026 job3: (groupid=0, jobs=1): err= 0: pid=1733194: Tue Nov 26 07:46:25 2024 00:36:58.027 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1022msec) 00:36:58.027 slat (nsec): min=28115, max=29176, avg=28441.82, stdev=266.75 00:36:58.027 clat (usec): min=1053, max=42013, avg=39506.92, stdev=9911.36 00:36:58.027 lat (usec): min=1081, max=42041, avg=39535.37, stdev=9911.37 00:36:58.027 clat percentiles (usec): 00:36:58.027 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 20.00th=[42206], 00:36:58.027 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:36:58.027 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:58.027 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:58.027 | 99.99th=[42206] 00:36:58.027 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:36:58.027 slat (nsec): min=9538, max=55625, avg=32396.03, stdev=10354.95 00:36:58.027 clat (usec): min=248, max=909, avg=639.13, stdev=120.40 00:36:58.027 lat (usec): min=260, max=946, avg=671.53, stdev=125.50 00:36:58.027 clat percentiles (usec): 00:36:58.027 | 1.00th=[ 351], 5.00th=[ 420], 10.00th=[ 461], 20.00th=[ 545], 00:36:58.027 | 30.00th=[ 578], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:36:58.027 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 783], 95.00th=[ 816], 00:36:58.027 | 99.00th=[ 865], 99.50th=[ 881], 99.90th=[ 914], 99.95th=[ 914], 00:36:58.027 | 99.99th=[ 914] 00:36:58.027 bw ( KiB/s): min= 4096, max= 4096, per=42.23%, avg=4096.00, stdev= 0.00, samples=1 00:36:58.027 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:58.027 lat (usec) : 250=0.19%, 500=13.23%, 750=65.60%, 1000=17.77% 00:36:58.027 lat (msec) : 2=0.19%, 50=3.02% 00:36:58.027 cpu : usr=0.98%, sys=2.15%, ctx=531, majf=0, minf=1 00:36:58.027 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:58.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:58.027 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:58.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:58.027 00:36:58.027 Run status group 0 (all jobs): 00:36:58.027 READ: bw=5280KiB/s (5407kB/s), 66.5KiB/s-2046KiB/s (68.1kB/s-2095kB/s), io=5396KiB (5526kB), run=1001-1022msec 00:36:58.027 WRITE: bw=9699KiB/s (9931kB/s), 2004KiB/s-3053KiB/s (2052kB/s-3126kB/s), io=9912KiB (10.1MB), run=1001-1022msec 00:36:58.027 00:36:58.027 Disk stats (read/write): 00:36:58.027 nvme0n1: ios=503/512, merge=0/0, ticks=752/284, in_queue=1036, util=83.87% 00:36:58.027 nvme0n2: ios=551/512, merge=0/0, ticks=852/249, in_queue=1101, util=90.91% 00:36:58.027 nvme0n3: ios=207/512, merge=0/0, ticks=1013/290, in_queue=1303, util=91.86% 00:36:58.027 nvme0n4: ios=75/512, merge=0/0, ticks=1092/262, in_queue=1354, util=97.32% 00:36:58.027 07:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:36:58.027 [global] 00:36:58.027 thread=1 00:36:58.027 invalidate=1 00:36:58.027 rw=randwrite 00:36:58.027 time_based=1 00:36:58.027 runtime=1 00:36:58.027 ioengine=libaio 00:36:58.027 direct=1 00:36:58.027 bs=4096 00:36:58.027 iodepth=1 00:36:58.027 norandommap=0 00:36:58.027 numjobs=1 00:36:58.027 00:36:58.027 verify_dump=1 00:36:58.027 verify_backlog=512 00:36:58.027 verify_state_save=0 00:36:58.027 do_verify=1 00:36:58.027 verify=crc32c-intel 00:36:58.027 [job0] 00:36:58.027 filename=/dev/nvme0n1 00:36:58.027 [job1] 00:36:58.027 filename=/dev/nvme0n2 00:36:58.027 [job2] 00:36:58.027 filename=/dev/nvme0n3 00:36:58.027 [job3] 00:36:58.027 filename=/dev/nvme0n4 00:36:58.027 Could not set queue depth (nvme0n1) 00:36:58.027 Could not set queue depth (nvme0n2) 00:36:58.027 Could not set queue depth (nvme0n3) 00:36:58.027 Could not set queue depth (nvme0n4) 00:36:58.027 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:58.027 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:58.027 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:58.027 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:36:58.027 fio-3.35 00:36:58.027 Starting 4 threads 00:36:59.411 00:36:59.411 job0: (groupid=0, jobs=1): err= 0: pid=1733611: Tue Nov 26 07:46:27 2024 00:36:59.411 read: IOPS=19, BW=79.4KiB/s (81.3kB/s)(80.0KiB/1008msec) 00:36:59.411 slat (nsec): min=27899, max=28835, avg=28191.25, stdev=225.35 00:36:59.411 clat (usec): min=748, max=41444, avg=38974.27, stdev=8998.18 00:36:59.411 lat (usec): min=776, max=41473, avg=39002.46, stdev=8998.17 00:36:59.411 clat percentiles (usec): 00:36:59.411 | 1.00th=[ 750], 5.00th=[ 750], 10.00th=[41157], 20.00th=[41157], 00:36:59.411 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:59.411 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:59.411 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:59.411 | 99.99th=[41681] 00:36:59.411 write: IOPS=507, BW=2032KiB/s 
(2081kB/s)(2048KiB/1008msec); 0 zone resets 00:36:59.411 slat (nsec): min=9031, max=72490, avg=31613.28, stdev=10308.72 00:36:59.411 clat (usec): min=137, max=738, avg=403.65, stdev=106.16 00:36:59.411 lat (usec): min=146, max=791, avg=435.26, stdev=109.76 00:36:59.411 clat percentiles (usec): 00:36:59.411 | 1.00th=[ 217], 5.00th=[ 258], 10.00th=[ 293], 20.00th=[ 314], 00:36:59.411 | 30.00th=[ 330], 40.00th=[ 351], 50.00th=[ 383], 60.00th=[ 424], 00:36:59.411 | 70.00th=[ 461], 80.00th=[ 502], 90.00th=[ 553], 95.00th=[ 594], 00:36:59.411 | 99.00th=[ 685], 99.50th=[ 709], 99.90th=[ 742], 99.95th=[ 742], 00:36:59.411 | 99.99th=[ 742] 00:36:59.411 bw ( KiB/s): min= 4096, max= 4096, per=47.36%, avg=4096.00, stdev= 0.00, samples=1 00:36:59.411 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:59.411 lat (usec) : 250=4.14%, 500=72.74%, 750=19.55% 00:36:59.411 lat (msec) : 50=3.57% 00:36:59.411 cpu : usr=1.69%, sys=1.49%, ctx=534, majf=0, minf=1 00:36:59.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:59.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.411 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.411 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:59.411 job1: (groupid=0, jobs=1): err= 0: pid=1733622: Tue Nov 26 07:46:27 2024 00:36:59.411 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:36:59.411 slat (nsec): min=9692, max=61254, avg=25417.59, stdev=3828.68 00:36:59.411 clat (usec): min=824, max=1290, avg=1089.63, stdev=66.56 00:36:59.411 lat (usec): min=849, max=1315, avg=1115.05, stdev=67.00 00:36:59.411 clat percentiles (usec): 00:36:59.411 | 1.00th=[ 898], 5.00th=[ 971], 10.00th=[ 1004], 20.00th=[ 1045], 00:36:59.411 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1106], 00:36:59.411 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188], 00:36:59.411 | 99.00th=[ 1237], 99.50th=[ 1237], 99.90th=[ 1287], 99.95th=[ 1287], 00:36:59.411 | 99.99th=[ 1287] 00:36:59.411 write: IOPS=662, BW=2649KiB/s (2713kB/s)(2652KiB/1001msec); 0 zone resets 00:36:59.411 slat (nsec): min=9235, max=48639, avg=26879.91, stdev=9041.80 00:36:59.411 clat (usec): min=150, max=990, avg=606.90, stdev=126.40 00:36:59.411 lat (usec): min=162, max=1021, avg=633.78, stdev=130.46 00:36:59.411 clat percentiles (usec): 00:36:59.411 | 1.00th=[ 258], 5.00th=[ 383], 10.00th=[ 441], 20.00th=[ 502], 00:36:59.412 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:36:59.412 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 783], 00:36:59.412 | 99.00th=[ 873], 99.50th=[ 881], 99.90th=[ 988], 99.95th=[ 988], 00:36:59.412 | 99.99th=[ 988] 00:36:59.412 bw ( KiB/s): min= 4096, max= 4096, per=47.36%, avg=4096.00, stdev= 0.00, samples=1 00:36:59.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:59.412 lat (usec) : 250=0.43%, 500=10.38%, 750=39.06%, 1000=10.89% 00:36:59.412 lat (msec) : 2=39.23% 00:36:59.412 cpu : usr=1.50%, sys=3.40%, ctx=1175, majf=0, minf=1 00:36:59.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:59.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.412 issued rwts: total=512,663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.412 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:36:59.412 job2: (groupid=0, jobs=1): err= 0: pid=1733638: Tue Nov 26 07:46:27 2024 00:36:59.412 read: IOPS=16, BW=67.7KiB/s (69.4kB/s)(68.0KiB/1004msec) 00:36:59.412 slat (nsec): min=25964, max=31465, avg=26658.47, stdev=1313.14 00:36:59.412 clat (usec): min=1243, max=42096, avg=39561.46, stdev=9874.63 00:36:59.412 lat (usec): min=1269, max=42127, avg=39588.11, stdev=9874.73 00:36:59.412 clat percentiles (usec): 00:36:59.412 | 1.00th=[ 1237], 5.00th=[ 1237], 10.00th=[41681], 20.00th=[41681], 00:36:59.412 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:36:59.412 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:59.412 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:59.412 | 99.99th=[42206] 00:36:59.412 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:36:59.412 slat (nsec): min=9440, max=67899, avg=28898.59, stdev=8994.54 00:36:59.412 clat (usec): min=221, max=1210, avg=606.44, stdev=125.86 00:36:59.412 lat (usec): min=231, max=1226, avg=635.33, stdev=129.89 00:36:59.412 clat percentiles (usec): 00:36:59.412 | 1.00th=[ 297], 5.00th=[ 379], 10.00th=[ 441], 20.00th=[ 498], 00:36:59.412 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:36:59.412 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 799], 00:36:59.412 | 99.00th=[ 873], 99.50th=[ 930], 99.90th=[ 1205], 99.95th=[ 1205], 00:36:59.412 | 99.99th=[ 1205] 00:36:59.412 bw ( KiB/s): min= 4096, max= 4096, per=47.36%, avg=4096.00, stdev= 0.00, samples=1 00:36:59.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:59.412 lat (usec) : 250=0.38%, 500=19.66%, 750=66.73%, 1000=9.83% 00:36:59.412 lat (msec) : 2=0.38%, 50=3.02% 00:36:59.412 cpu : usr=0.90%, sys=1.40%, ctx=531, majf=0, minf=1 00:36:59.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:59.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.412 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:59.412 job3: (groupid=0, jobs=1): err= 0: pid=1733645: Tue Nov 26 07:46:27 2024 00:36:59.412 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1017msec) 00:36:59.412 slat (nsec): min=14389, max=26604, avg=25785.40, stdev=2685.62 00:36:59.412 clat (usec): min=40859, max=41460, avg=40992.61, stdev=119.42 00:36:59.412 lat (usec): min=40886, max=41475, avg=41018.39, stdev=116.93 00:36:59.412 clat percentiles (usec): 00:36:59.412 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:36:59.412 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:59.412 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:59.412 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:36:59.412 | 99.99th=[41681] 00:36:59.412 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:36:59.412 slat (nsec): min=10785, max=61005, avg=25899.98, stdev=11755.34 00:36:59.412 clat (usec): min=130, max=729, avg=351.76, stdev=93.92 00:36:59.412 lat (usec): min=141, max=747, avg=377.66, stdev=93.22 00:36:59.412 clat percentiles (usec): 00:36:59.412 | 1.00th=[ 206], 5.00th=[ 233], 10.00th=[ 245], 20.00th=[ 269], 00:36:59.412 | 30.00th=[ 293], 40.00th=[ 314], 50.00th=[ 334], 60.00th=[ 367], 00:36:59.412 | 70.00th=[ 
396], 80.00th=[ 420], 90.00th=[ 474], 95.00th=[ 529], 00:36:59.412 | 99.00th=[ 627], 99.50th=[ 644], 99.90th=[ 725], 99.95th=[ 725], 00:36:59.412 | 99.99th=[ 725] 00:36:59.412 bw ( KiB/s): min= 4096, max= 4096, per=47.36%, avg=4096.00, stdev= 0.00, samples=1 00:36:59.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:36:59.412 lat (usec) : 250=11.09%, 500=78.20%, 750=6.95% 00:36:59.412 lat (msec) : 50=3.76% 00:36:59.412 cpu : usr=0.79%, sys=1.18%, ctx=532, majf=0, minf=2 00:36:59.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:59.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.412 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:36:59.412 00:36:59.412 Run status group 0 (all jobs): 00:36:59.412 READ: bw=2238KiB/s (2292kB/s), 67.7KiB/s-2046KiB/s (69.4kB/s-2095kB/s), io=2276KiB (2331kB), run=1001-1017msec 00:36:59.412 WRITE: bw=8649KiB/s (8857kB/s), 2014KiB/s-2649KiB/s (2062kB/s-2713kB/s), io=8796KiB (9007kB), run=1001-1017msec 00:36:59.412 00:36:59.412 Disk stats (read/write): 00:36:59.412 nvme0n1: ios=70/512, merge=0/0, ticks=675/161, in_queue=836, util=87.37% 00:36:59.412 nvme0n2: ios=502/512, merge=0/0, ticks=574/305, in_queue=879, util=90.93% 00:36:59.412 nvme0n3: ios=70/512, merge=0/0, ticks=618/300, in_queue=918, util=95.26% 00:36:59.412 nvme0n4: ios=72/512, merge=0/0, ticks=706/168, in_queue=874, util=96.48% 00:36:59.412 07:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:36:59.412 [global] 00:36:59.412 thread=1 00:36:59.412 invalidate=1 00:36:59.412 rw=write 00:36:59.412 time_based=1 00:36:59.412 runtime=1 00:36:59.412 ioengine=libaio 00:36:59.412 direct=1 00:36:59.412 bs=4096 00:36:59.412 iodepth=128 00:36:59.412 norandommap=0 00:36:59.412 numjobs=1 00:36:59.412 00:36:59.412 verify_dump=1 00:36:59.412 verify_backlog=512 00:36:59.412 verify_state_save=0 00:36:59.412 do_verify=1 00:36:59.412 verify=crc32c-intel 00:36:59.412 [job0] 00:36:59.412 filename=/dev/nvme0n1 00:36:59.412 [job1] 00:36:59.412 filename=/dev/nvme0n2 00:36:59.412 [job2] 00:36:59.412 filename=/dev/nvme0n3 00:36:59.412 [job3] 00:36:59.412 filename=/dev/nvme0n4 00:36:59.412 Could not set queue depth (nvme0n1) 00:36:59.412 Could not set queue depth (nvme0n2) 00:36:59.412 Could not set queue depth (nvme0n3) 00:36:59.412 Could not set queue depth (nvme0n4) 00:36:59.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:59.672 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:59.672 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:59.672 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:59.672 fio-3.35 00:36:59.672 Starting 4 threads 00:37:01.056 00:37:01.056 job0: (groupid=0, jobs=1): err= 0: pid=1734098: Tue Nov 26 07:46:28 2024 00:37:01.056 read: IOPS=6021, BW=23.5MiB/s (24.7MB/s)(23.6MiB/1005msec) 00:37:01.056 slat (nsec): min=955, max=27816k, avg=80766.21, stdev=797835.05 00:37:01.056 clat (usec): min=929, max=132696, avg=10662.66, stdev=8867.80 00:37:01.056 lat 
(usec): min=936, max=132702, avg=10743.42, stdev=8917.03 00:37:01.056 clat percentiles (usec): 00:37:01.056 | 1.00th=[ 1926], 5.00th=[ 2507], 10.00th=[ 4752], 20.00th=[ 6063], 00:37:01.056 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 7504], 60.00th=[ 8356], 00:37:01.056 | 70.00th=[ 9765], 80.00th=[ 13566], 90.00th=[ 24511], 95.00th=[ 31065], 00:37:01.056 | 99.00th=[ 38536], 99.50th=[ 39584], 99.90th=[ 46924], 99.95th=[130548], 00:37:01.056 | 99.99th=[132645] 00:37:01.056 write: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec); 0 zone resets 00:37:01.056 slat (nsec): min=1627, max=14472k, avg=45787.94, stdev=428349.48 00:37:01.056 clat (usec): min=269, max=114490, avg=8201.18, stdev=10690.29 00:37:01.056 lat (usec): min=279, max=114499, avg=8246.97, stdev=10696.52 00:37:01.056 clat percentiles (usec): 00:37:01.056 | 1.00th=[ 1090], 5.00th=[ 1631], 10.00th=[ 2409], 20.00th=[ 3949], 00:37:01.056 | 30.00th=[ 4555], 40.00th=[ 5538], 50.00th=[ 5932], 60.00th=[ 6718], 00:37:01.056 | 70.00th=[ 8029], 80.00th=[ 9241], 90.00th=[ 12518], 95.00th=[ 15926], 00:37:01.056 | 99.00th=[ 74974], 99.50th=[ 86508], 99.90th=[114820], 99.95th=[114820], 00:37:01.056 | 99.99th=[114820] 00:37:01.056 bw ( KiB/s): min=23200, max=38240, per=30.43%, avg=30720.00, stdev=10634.89, samples=2 00:37:01.056 iops : min= 5800, max= 9560, avg=7680.00, stdev=2658.72, samples=2 00:37:01.056 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.31% 00:37:01.056 lat (msec) : 2=4.50%, 4=10.81%, 10=61.18%, 20=15.27%, 50=6.85% 00:37:01.056 lat (msec) : 100=0.87%, 250=0.16% 00:37:01.056 cpu : usr=5.88%, sys=6.67%, ctx=522, majf=0, minf=1 00:37:01.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:37:01.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:01.056 issued rwts: total=6052,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:01.056 job1: (groupid=0, jobs=1): err= 0: pid=1734102: Tue Nov 26 07:46:28 2024 00:37:01.056 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:37:01.056 slat (nsec): min=943, max=25974k, avg=77474.72, stdev=640084.20 00:37:01.056 clat (usec): min=4898, max=67962, avg=10019.12, stdev=6717.72 00:37:01.056 lat (usec): min=4903, max=67968, avg=10096.60, stdev=6764.87 00:37:01.056 clat percentiles (usec): 00:37:01.056 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 7111], 00:37:01.056 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8291], 00:37:01.056 | 70.00th=[ 8979], 80.00th=[10159], 90.00th=[14615], 95.00th=[21627], 00:37:01.056 | 99.00th=[48497], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:37:01.056 | 99.99th=[67634] 00:37:01.056 write: IOPS=6860, BW=26.8MiB/s (28.1MB/s)(26.9MiB/1002msec); 0 zone resets 00:37:01.056 slat (nsec): min=1606, max=40156k, avg=66676.72, stdev=597009.40 00:37:01.056 clat (usec): min=1765, max=47638, avg=8040.77, stdev=3688.63 00:37:01.056 lat (usec): min=2391, max=47687, avg=8107.45, stdev=3739.11 00:37:01.056 clat percentiles (usec): 00:37:01.056 | 1.00th=[ 4555], 5.00th=[ 5473], 10.00th=[ 5735], 20.00th=[ 6194], 00:37:01.056 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7504], 00:37:01.056 | 70.00th=[ 7963], 80.00th=[ 8586], 90.00th=[11338], 95.00th=[13829], 00:37:01.056 | 99.00th=[24773], 99.50th=[31851], 99.90th=[47449], 99.95th=[47449], 00:37:01.056 | 99.99th=[47449] 00:37:01.056 bw ( KiB/s): min=25480, 
max=28496, per=26.73%, avg=26988.00, stdev=2132.63, samples=2 00:37:01.056 iops : min= 6370, max= 7124, avg=6747.00, stdev=533.16, samples=2 00:37:01.056 lat (msec) : 2=0.01%, 4=0.24%, 10=83.95%, 20=11.96%, 50=3.37% 00:37:01.056 lat (msec) : 100=0.47% 00:37:01.056 cpu : usr=2.70%, sys=5.19%, ctx=796, majf=0, minf=1 00:37:01.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:37:01.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:01.056 issued rwts: total=6656,6874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:01.056 job2: (groupid=0, jobs=1): err= 0: pid=1734110: Tue Nov 26 07:46:28 2024 00:37:01.057 read: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec) 00:37:01.057 slat (nsec): min=1487, max=12919k, avg=81855.54, stdev=678387.54 00:37:01.057 clat (usec): min=2496, max=33496, avg=11944.30, stdev=5245.91 00:37:01.057 lat (usec): min=2502, max=33504, avg=12026.16, stdev=5291.97 00:37:01.057 clat percentiles (usec): 00:37:01.057 | 1.00th=[ 4113], 5.00th=[ 5932], 10.00th=[ 6980], 20.00th=[ 7767], 00:37:01.057 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[12125], 00:37:01.057 | 70.00th=[14484], 80.00th=[16057], 90.00th=[19006], 95.00th=[21890], 00:37:01.057 | 99.00th=[30016], 99.50th=[33424], 99.90th=[33424], 99.95th=[33424], 00:37:01.057 | 99.99th=[33424] 00:37:01.057 write: IOPS=5738, BW=22.4MiB/s (23.5MB/s)(22.6MiB/1009msec); 0 zone resets 00:37:01.057 slat (nsec): min=1897, max=12826k, avg=78229.06, stdev=688657.41 00:37:01.057 clat (usec): min=665, max=36346, avg=10414.08, stdev=5281.82 00:37:01.057 lat (usec): min=756, max=36354, avg=10492.31, stdev=5321.59 00:37:01.057 clat percentiles (usec): 00:37:01.057 | 1.00th=[ 1778], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 6521], 00:37:01.057 | 30.00th=[ 7439], 40.00th=[ 8291], 50.00th=[ 9241], 60.00th=[10814], 00:37:01.057 | 70.00th=[11994], 80.00th=[13042], 90.00th=[16188], 95.00th=[21890], 00:37:01.057 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:37:01.057 | 99.99th=[36439] 00:37:01.057 bw ( KiB/s): min=20048, max=25248, per=22.44%, avg=22648.00, stdev=3676.96, samples=2 00:37:01.057 iops : min= 5012, max= 6312, avg=5662.00, stdev=919.24, samples=2 00:37:01.057 lat (usec) : 750=0.03%, 1000=0.07% 00:37:01.057 lat (msec) : 2=0.55%, 4=1.37%, 10=50.21%, 20=40.42%, 50=7.35% 00:37:01.057 cpu : usr=3.67%, sys=8.23%, ctx=242, majf=0, minf=1 00:37:01.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:37:01.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:01.057 issued rwts: total=5632,5790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.057 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:01.057 job3: (groupid=0, jobs=1): err= 0: pid=1734114: Tue Nov 26 07:46:28 2024 00:37:01.057 read: IOPS=4711, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:37:01.057 slat (nsec): min=973, max=16597k, avg=101539.42, stdev=802700.77 00:37:01.057 clat (usec): min=3750, max=36215, avg=13597.11, stdev=4912.79 00:37:01.057 lat (usec): min=3758, max=37137, avg=13698.65, stdev=4969.59 00:37:01.057 clat percentiles (usec): 00:37:01.057 | 1.00th=[ 6259], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 9372], 00:37:01.057 | 30.00th=[10290], 40.00th=[11207], 
50.00th=[12911], 60.00th=[14222], 00:37:01.057 | 70.00th=[15533], 80.00th=[17695], 90.00th=[20055], 95.00th=[22152], 00:37:01.057 | 99.00th=[29754], 99.50th=[30016], 99.90th=[30802], 99.95th=[30802], 00:37:01.057 | 99.99th=[36439] 00:37:01.057 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:37:01.057 slat (nsec): min=1666, max=11633k, avg=95440.32, stdev=667693.97 00:37:01.057 clat (usec): min=1149, max=45924, avg=12194.44, stdev=6522.49 00:37:01.057 lat (usec): min=1179, max=45926, avg=12289.88, stdev=6563.96 00:37:01.057 clat percentiles (usec): 00:37:01.057 | 1.00th=[ 4359], 5.00th=[ 5669], 10.00th=[ 6521], 20.00th=[ 7832], 00:37:01.057 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10683], 60.00th=[11600], 00:37:01.057 | 70.00th=[12649], 80.00th=[15795], 90.00th=[19268], 95.00th=[26870], 00:37:01.057 | 99.00th=[40109], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:37:01.057 | 99.99th=[45876] 00:37:01.057 bw ( KiB/s): min=20464, max=20496, per=20.29%, avg=20480.00, stdev=22.63, samples=2 00:37:01.057 iops : min= 5116, max= 5124, avg=5120.00, stdev= 5.66, samples=2 00:37:01.057 lat (msec) : 2=0.10%, 4=0.29%, 10=34.73%, 20=55.31%, 50=9.56% 00:37:01.057 cpu : usr=3.49%, sys=5.58%, ctx=354, majf=0, minf=1 00:37:01.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:37:01.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:01.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:01.057 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:01.057 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:01.057 00:37:01.057 Run status group 0 (all jobs): 00:37:01.057 READ: bw=89.3MiB/s (93.7MB/s), 18.4MiB/s-25.9MiB/s (19.3MB/s-27.2MB/s), io=90.1MiB (94.5MB), run=1002-1009msec 00:37:01.057 WRITE: bw=98.6MiB/s (103MB/s), 19.9MiB/s-29.9MiB/s (20.9MB/s-31.3MB/s), io=99.5MiB (104MB), run=1002-1009msec 00:37:01.057 00:37:01.057 Disk stats (read/write): 00:37:01.057 nvme0n1: ios=4653/6174, merge=0/0, ticks=35137/40533, in_queue=75670, util=86.87% 00:37:01.057 nvme0n2: ios=5864/6144, merge=0/0, ticks=19672/17258, in_queue=36930, util=90.67% 00:37:01.057 nvme0n3: ios=5180/5179, merge=0/0, ticks=56223/47330, in_queue=103553, util=93.67% 00:37:01.057 nvme0n4: ios=4153/4287, merge=0/0, ticks=48397/44145, in_queue=92542, util=93.82% 00:37:01.057 07:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:37:01.057 [global] 00:37:01.057 thread=1 00:37:01.057 invalidate=1 00:37:01.057 rw=randwrite 00:37:01.057 time_based=1 00:37:01.057 runtime=1 00:37:01.057 ioengine=libaio 00:37:01.057 direct=1 00:37:01.057 bs=4096 00:37:01.057 iodepth=128 00:37:01.057 norandommap=0 00:37:01.057 numjobs=1 00:37:01.057 00:37:01.057 verify_dump=1 00:37:01.057 verify_backlog=512 00:37:01.057 verify_state_save=0 00:37:01.057 do_verify=1 00:37:01.057 verify=crc32c-intel 00:37:01.057 [job0] 00:37:01.057 filename=/dev/nvme0n1 00:37:01.057 [job1] 00:37:01.057 filename=/dev/nvme0n2 00:37:01.057 [job2] 00:37:01.057 filename=/dev/nvme0n3 00:37:01.057 [job3] 00:37:01.057 filename=/dev/nvme0n4 00:37:01.057 Could not set queue depth (nvme0n1) 00:37:01.057 Could not set queue depth (nvme0n2) 00:37:01.057 Could not set queue depth (nvme0n3) 00:37:01.057 Could not set queue depth (nvme0n4) 00:37:01.317 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:01.317 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:01.317 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:01.317 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:01.317 fio-3.35 00:37:01.317 Starting 4 threads 00:37:02.700 00:37:02.700 job0: (groupid=0, jobs=1): err= 0: pid=1734601: Tue Nov 26 07:46:30 2024 00:37:02.700 read: IOPS=5815, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1003msec) 00:37:02.700 slat (nsec): min=938, max=13991k, avg=74291.74, stdev=523754.92 00:37:02.700 clat (usec): min=1958, max=34047, avg=9392.97, stdev=4460.82 00:37:02.700 lat (usec): min=1968, max=34055, avg=9467.26, stdev=4500.23 00:37:02.700 clat percentiles (usec): 00:37:02.700 | 1.00th=[ 2474], 5.00th=[ 4359], 10.00th=[ 5080], 20.00th=[ 6587], 00:37:02.700 | 30.00th=[ 7373], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9110], 00:37:02.700 | 70.00th=[ 9765], 80.00th=[11076], 90.00th=[14353], 95.00th=[17957], 00:37:02.700 | 99.00th=[26870], 99.50th=[28705], 99.90th=[30802], 99.95th=[33817], 00:37:02.700 | 99.99th=[33817] 00:37:02.700 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:37:02.700 slat (nsec): min=1585, max=14398k, avg=83927.70, stdev=517944.18 00:37:02.700 clat (usec): min=808, max=34005, avg=11643.07, stdev=7136.35 00:37:02.700 lat (usec): min=816, max=34007, avg=11727.00, stdev=7184.94 00:37:02.700 clat percentiles (usec): 00:37:02.700 | 1.00th=[ 2868], 5.00th=[ 4293], 10.00th=[ 5276], 20.00th=[ 6325], 00:37:02.700 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8717], 60.00th=[ 9634], 00:37:02.700 | 70.00th=[12518], 80.00th=[19006], 90.00th=[23987], 95.00th=[27657], 00:37:02.700 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30802], 99.95th=[31851], 00:37:02.700 | 99.99th=[33817] 00:37:02.700 bw ( KiB/s): min=23800, max=25352, per=28.07%, avg=24576.00, stdev=1097.43, samples=2 00:37:02.700 iops : min= 5950, max= 6338, avg=6144.00, stdev=274.36, samples=2 00:37:02.700 lat (usec) : 1000=0.03% 00:37:02.700 lat (msec) : 2=0.28%, 4=3.62%, 10=61.83%, 20=22.77%, 50=11.48% 00:37:02.700 cpu : usr=3.19%, sys=6.89%, ctx=466, majf=0, minf=1 00:37:02.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:02.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:02.700 issued rwts: total=5833,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:02.700 job1: (groupid=0, jobs=1): err= 0: pid=1734606: Tue Nov 26 07:46:30 2024 00:37:02.700 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:37:02.700 slat (nsec): min=976, max=17765k, avg=102571.74, stdev=774983.38 00:37:02.700 clat (usec): min=6339, max=52841, avg=13372.02, stdev=8143.50 00:37:02.700 lat (usec): min=6345, max=52865, avg=13474.59, stdev=8218.74 00:37:02.700 clat percentiles (usec): 00:37:02.700 | 1.00th=[ 6783], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 7963], 00:37:02.700 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10552], 00:37:02.700 | 70.00th=[13173], 80.00th=[18744], 90.00th=[25297], 95.00th=[33817], 00:37:02.700 | 99.00th=[42730], 99.50th=[47449], 99.90th=[47449], 99.95th=[50594], 00:37:02.700 | 99.99th=[52691] 00:37:02.700 
write: IOPS=3672, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1004msec); 0 zone resets 00:37:02.700 slat (nsec): min=1552, max=18911k, avg=166031.29, stdev=995398.43 00:37:02.700 clat (usec): min=1234, max=64007, avg=21495.55, stdev=15588.98 00:37:02.700 lat (usec): min=3674, max=64017, avg=21661.58, stdev=15711.46 00:37:02.700 clat percentiles (usec): 00:37:02.700 | 1.00th=[ 4752], 5.00th=[ 6783], 10.00th=[ 7373], 20.00th=[ 8455], 00:37:02.700 | 30.00th=[ 9110], 40.00th=[13698], 50.00th=[16581], 60.00th=[18744], 00:37:02.700 | 70.00th=[24511], 80.00th=[33162], 90.00th=[52691], 95.00th=[55313], 00:37:02.700 | 99.00th=[58983], 99.50th=[60031], 99.90th=[64226], 99.95th=[64226], 00:37:02.700 | 99.99th=[64226] 00:37:02.700 bw ( KiB/s): min=12336, max=16384, per=16.40%, avg=14360.00, stdev=2862.37, samples=2 00:37:02.700 iops : min= 3084, max= 4096, avg=3590.00, stdev=715.59, samples=2 00:37:02.700 lat (msec) : 2=0.01%, 4=0.08%, 10=43.13%, 20=32.28%, 50=18.69% 00:37:02.700 lat (msec) : 100=5.80% 00:37:02.700 cpu : usr=3.59%, sys=3.79%, ctx=275, majf=0, minf=2 00:37:02.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:37:02.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:02.700 issued rwts: total=3584,3687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:02.700 job2: (groupid=0, jobs=1): err= 0: pid=1734608: Tue Nov 26 07:46:30 2024 00:37:02.700 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:37:02.700 slat (nsec): min=951, max=33673k, avg=83340.57, stdev=716185.59 00:37:02.700 clat (usec): min=3933, max=43926, avg=10533.96, stdev=5093.44 00:37:02.700 lat (usec): min=3936, max=43935, avg=10617.30, stdev=5144.71 00:37:02.700 clat percentiles (usec): 00:37:02.700 | 1.00th=[ 4424], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 7701], 00:37:02.700 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9241], 00:37:02.700 | 70.00th=[10552], 80.00th=[11600], 90.00th=[17695], 95.00th=[22414], 00:37:02.700 | 99.00th=[32375], 99.50th=[39584], 99.90th=[39584], 99.95th=[43779], 00:37:02.700 | 99.99th=[43779] 00:37:02.700 write: IOPS=6159, BW=24.1MiB/s (25.2MB/s)(24.2MiB/1004msec); 0 zone resets 00:37:02.700 slat (nsec): min=1631, max=11534k, avg=73642.88, stdev=533971.99 00:37:02.700 clat (usec): min=1237, max=49617, avg=10120.55, stdev=5620.22 00:37:02.700 lat (usec): min=1249, max=49627, avg=10194.19, stdev=5661.07 00:37:02.700 clat percentiles (usec): 00:37:02.700 | 1.00th=[ 4686], 5.00th=[ 6456], 10.00th=[ 6849], 20.00th=[ 7308], 00:37:02.700 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8225], 00:37:02.700 | 70.00th=[ 9503], 80.00th=[13304], 90.00th=[16188], 95.00th=[19792], 00:37:02.700 | 99.00th=[43779], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:37:02.700 | 99.99th=[49546] 00:37:02.700 bw ( KiB/s): min=20592, max=28560, per=28.07%, avg=24576.00, stdev=5634.23, samples=2 00:37:02.700 iops : min= 5148, max= 7140, avg=6144.00, stdev=1408.56, samples=2 00:37:02.700 lat (msec) : 2=0.02%, 4=0.48%, 10=68.81%, 20=25.68%, 50=5.00% 00:37:02.700 cpu : usr=4.59%, sys=6.28%, ctx=432, majf=0, minf=1 00:37:02.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:02.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:02.700 
issued rwts: total=6144,6184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:02.700 job3: (groupid=0, jobs=1): err= 0: pid=1734612: Tue Nov 26 07:46:30 2024 00:37:02.700 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:37:02.700 slat (nsec): min=998, max=11843k, avg=82762.21, stdev=613357.83 00:37:02.700 clat (usec): min=3339, max=33457, avg=10210.93, stdev=4043.37 00:37:02.700 lat (usec): min=3348, max=33459, avg=10293.69, stdev=4092.54 00:37:02.700 clat percentiles (usec): 00:37:02.700 | 1.00th=[ 5407], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7308], 00:37:02.700 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8979], 60.00th=[ 9765], 00:37:02.700 | 70.00th=[11076], 80.00th=[12780], 90.00th=[15926], 95.00th=[17695], 00:37:02.700 | 99.00th=[25035], 99.50th=[29492], 99.90th=[32637], 99.95th=[33424], 00:37:02.700 | 99.99th=[33424] 00:37:02.700 write: IOPS=5998, BW=23.4MiB/s (24.6MB/s)(23.6MiB/1008msec); 0 zone resets 00:37:02.700 slat (nsec): min=1615, max=11176k, avg=83143.46, stdev=494387.24 00:37:02.700 clat (usec): min=1175, max=35652, avg=11643.37, stdev=7293.49 00:37:02.700 lat (usec): min=1190, max=35656, avg=11726.51, stdev=7340.44 00:37:02.700 clat percentiles (usec): 00:37:02.700 | 1.00th=[ 3720], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 6587], 00:37:02.700 | 30.00th=[ 7046], 40.00th=[ 7439], 50.00th=[ 8848], 60.00th=[ 9896], 00:37:02.700 | 70.00th=[12125], 80.00th=[17957], 90.00th=[23987], 95.00th=[27657], 00:37:02.700 | 99.00th=[32900], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:37:02.700 | 99.99th=[35914] 00:37:02.700 bw ( KiB/s): min=21168, max=26176, per=27.04%, avg=23672.00, stdev=3541.19, samples=2 00:37:02.700 iops : min= 5292, max= 6544, avg=5918.00, stdev=885.30, samples=2 00:37:02.700 lat (msec) : 2=0.02%, 4=1.46%, 10=59.25%, 20=29.43%, 50=9.84% 00:37:02.700 cpu : usr=3.77%, sys=7.05%, ctx=400, majf=0, minf=2 00:37:02.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:02.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:02.700 issued rwts: total=5632,6046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:02.700 00:37:02.700 Run status group 0 (all jobs): 00:37:02.700 READ: bw=82.1MiB/s (86.1MB/s), 13.9MiB/s-23.9MiB/s (14.6MB/s-25.1MB/s), io=82.8MiB (86.8MB), run=1003-1008msec 00:37:02.700 WRITE: bw=85.5MiB/s (89.6MB/s), 14.3MiB/s-24.1MiB/s (15.0MB/s-25.2MB/s), io=86.2MiB (90.4MB), run=1003-1008msec 00:37:02.700 00:37:02.700 Disk stats (read/write): 00:37:02.701 nvme0n1: ios=5149/5175, merge=0/0, ticks=39949/45812, in_queue=85761, util=96.49% 00:37:02.701 nvme0n2: ios=3107/3247, merge=0/0, ticks=20324/29781, in_queue=50105, util=87.16% 00:37:02.701 nvme0n3: ios=4733/5120, merge=0/0, ticks=26576/23940, in_queue=50516, util=88.40% 00:37:02.701 nvme0n4: ios=4629/4810, merge=0/0, ticks=47649/54702, in_queue=102351, util=90.92% 00:37:02.701 07:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:37:02.701 07:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1734937 00:37:02.701 07:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:37:02.701 
07:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:37:02.701 [global] 00:37:02.701 thread=1 00:37:02.701 invalidate=1 00:37:02.701 rw=read 00:37:02.701 time_based=1 00:37:02.701 runtime=10 00:37:02.701 ioengine=libaio 00:37:02.701 direct=1 00:37:02.701 bs=4096 00:37:02.701 iodepth=1 00:37:02.701 norandommap=1 00:37:02.701 numjobs=1 00:37:02.701 00:37:02.701 [job0] 00:37:02.701 filename=/dev/nvme0n1 00:37:02.701 [job1] 00:37:02.701 filename=/dev/nvme0n2 00:37:02.701 [job2] 00:37:02.701 filename=/dev/nvme0n3 00:37:02.701 [job3] 00:37:02.701 filename=/dev/nvme0n4 00:37:02.701 Could not set queue depth (nvme0n1) 00:37:02.701 Could not set queue depth (nvme0n2) 00:37:02.701 Could not set queue depth (nvme0n3) 00:37:02.701 Could not set queue depth (nvme0n4) 00:37:02.959 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:02.959 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:02.959 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:02.959 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:02.959 fio-3.35 00:37:02.959 Starting 4 threads 00:37:06.260 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:37:06.260 07:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:37:06.260 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=561152, buflen=4096 00:37:06.260 fio: pid=1735124, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:06.260 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=3674112, buflen=4096 00:37:06.260 fio: pid=1735123, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:06.260 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:06.260 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:37:06.260 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=8675328, buflen=4096 00:37:06.260 fio: pid=1735121, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:06.260 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:06.260 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:37:06.614 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5971968, buflen=4096 00:37:06.614 fio: pid=1735122, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:06.615 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:06.615 07:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:37:06.615 00:37:06.615 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1735121: Tue Nov 26 07:46:34 2024 00:37:06.615 read: IOPS=712, BW=2847KiB/s (2915kB/s)(8472KiB/2976msec) 00:37:06.615 slat (usec): min=6, max=28241, avg=43.44, stdev=642.24 00:37:06.615 clat (usec): min=574, max=42156, avg=1344.92, stdev=3160.78 00:37:06.615 lat (usec): min=599, max=42181, avg=1388.37, stdev=3225.01 00:37:06.615 clat percentiles (usec): 00:37:06.615 | 1.00th=[ 816], 5.00th=[ 930], 10.00th=[ 979], 20.00th=[ 1037], 00:37:06.615 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:37:06.615 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1237], 00:37:06.615 | 99.00th=[ 1352], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:37:06.615 | 99.99th=[42206] 00:37:06.615 bw ( KiB/s): min= 2560, max= 3624, per=54.79%, avg=3192.00, stdev=473.52, samples=5 00:37:06.615 iops : min= 640, max= 906, avg=798.00, stdev=118.38, samples=5 00:37:06.615 lat (usec) : 750=0.33%, 1000=12.88% 00:37:06.615 lat (msec) : 2=86.08%, 10=0.05%, 50=0.61% 00:37:06.615 cpu : usr=0.81%, sys=2.15%, ctx=2123, majf=0, minf=1 00:37:06.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:06.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.615 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.615 issued rwts: total=2119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:06.615 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1735122: Tue Nov 26 07:46:34 2024 00:37:06.615 read: IOPS=460, BW=1843KiB/s (1887kB/s)(5832KiB/3165msec) 00:37:06.615 slat (usec): min=5, max=20459, avg=74.26, stdev=839.53 00:37:06.615 clat (usec): min=448, max=41913, avg=2070.90, stdev=7008.49 00:37:06.615 lat (usec): min=456, max=41939, avg=2145.19, stdev=7050.07 00:37:06.615 clat percentiles (usec): 00:37:06.615 | 1.00th=[ 570], 5.00th=[ 644], 10.00th=[ 676], 20.00th=[ 717], 00:37:06.615 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 807], 00:37:06.615 | 70.00th=[ 840], 80.00th=[ 955], 90.00th=[ 1012], 95.00th=[ 1106], 00:37:06.615 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:37:06.615 | 99.99th=[41681] 00:37:06.615 bw ( KiB/s): min= 96, max= 4584, per=30.23%, avg=1761.67, stdev=2053.81, samples=6 00:37:06.615 iops : min= 24, max= 1146, avg=440.33, stdev=513.33, samples=6 00:37:06.615 lat (usec) : 500=0.07%, 750=28.65%, 1000=59.56% 00:37:06.615 lat (msec) : 2=8.50%, 50=3.15% 00:37:06.615 cpu : usr=0.73%, sys=1.20%, ctx=1466, majf=0, minf=2 00:37:06.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:06.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.615 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.615 issued rwts: total=1459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:06.615 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1735123: Tue Nov 26 07:46:34 2024 00:37:06.615 read: IOPS=322, BW=1287KiB/s 
(1318kB/s)(3588KiB/2787msec) 00:37:06.615 slat (usec): min=7, max=17598, avg=62.87, stdev=766.93 00:37:06.615 clat (usec): min=309, max=42137, avg=3008.19, stdev=8704.20 00:37:06.615 lat (usec): min=317, max=42165, avg=3071.11, stdev=8729.94 00:37:06.615 clat percentiles (usec): 00:37:06.615 | 1.00th=[ 586], 5.00th=[ 742], 10.00th=[ 832], 20.00th=[ 971], 00:37:06.615 | 30.00th=[ 1012], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1074], 00:37:06.615 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1188], 95.00th=[ 6390], 00:37:06.615 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:06.615 | 99.99th=[42206] 00:37:06.615 bw ( KiB/s): min= 896, max= 1440, per=19.55%, avg=1139.20, stdev=208.66, samples=5 00:37:06.615 iops : min= 224, max= 360, avg=284.80, stdev=52.17, samples=5 00:37:06.615 lat (usec) : 500=0.33%, 750=5.01%, 1000=21.05% 00:37:06.615 lat (msec) : 2=68.49%, 10=0.11%, 50=4.90% 00:37:06.615 cpu : usr=0.72%, sys=1.11%, ctx=900, majf=0, minf=2 00:37:06.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:06.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.615 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.615 issued rwts: total=898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:06.615 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1735124: Tue Nov 26 07:46:34 2024 00:37:06.615 read: IOPS=52, BW=209KiB/s (214kB/s)(548KiB/2627msec) 00:37:06.615 slat (nsec): min=11092, max=52676, avg=27479.36, stdev=5409.27 00:37:06.615 clat (usec): min=582, max=42142, avg=18979.27, stdev=20343.33 00:37:06.615 lat (usec): min=609, max=42153, avg=19006.76, stdev=20342.50 00:37:06.615 clat percentiles (usec): 00:37:06.615 | 1.00th=[ 627], 5.00th=[ 701], 10.00th=[ 725], 20.00th=[ 791], 00:37:06.615 | 30.00th=[ 832], 40.00th=[ 857], 50.00th=[ 955], 60.00th=[41157], 00:37:06.615 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:37:06.615 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:06.615 | 99.99th=[42206] 00:37:06.615 bw ( KiB/s): min= 96, max= 384, per=3.67%, avg=214.40, stdev=142.03, samples=5 00:37:06.615 iops : min= 24, max= 96, avg=53.60, stdev=35.51, samples=5 00:37:06.615 lat (usec) : 750=13.04%, 1000=39.86% 00:37:06.615 lat (msec) : 2=2.17%, 50=44.20% 00:37:06.615 cpu : usr=0.00%, sys=0.19%, ctx=138, majf=0, minf=2 00:37:06.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:06.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.615 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:06.615 issued rwts: total=138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:06.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:06.615 00:37:06.615 Run status group 0 (all jobs): 00:37:06.615 READ: bw=5826KiB/s (5966kB/s), 209KiB/s-2847KiB/s (214kB/s-2915kB/s), io=18.0MiB (18.9MB), run=2627-3165msec 00:37:06.615 00:37:06.615 Disk stats (read/write): 00:37:06.615 nvme0n1: ios=2091/0, merge=0/0, ticks=2681/0, in_queue=2681, util=93.56% 00:37:06.615 nvme0n2: ios=1408/0, merge=0/0, ticks=2914/0, in_queue=2914, util=93.46% 00:37:06.615 nvme0n3: ios=736/0, merge=0/0, ticks=2481/0, in_queue=2481, util=95.99% 00:37:06.615 nvme0n4: ios=136/0, merge=0/0, ticks=2555/0, in_queue=2555, util=96.46% 00:37:06.615 07:46:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:06.615 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:37:06.984 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:06.984 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:37:06.984 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:06.984 07:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:37:07.265 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:07.265 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:37:07.265 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:37:07.265 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1734937 00:37:07.265 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:37:07.265 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:07.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:37:07.527 nvmf hotplug test: fio failed as expected 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:07.527 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:07.527 rmmod nvme_tcp 00:37:07.788 rmmod nvme_fabrics 00:37:07.788 rmmod nvme_keyring 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1731762 ']' 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1731762 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1731762 ']' 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1731762 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1731762 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1731762' 00:37:07.788 killing process with pid 1731762 00:37:07.788 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1731762 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1731762 00:37:07.789 
07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:07.789 07:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.338 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:10.338 00:37:10.338 real 0m28.024s 00:37:10.338 user 2m13.080s 00:37:10.338 sys 0m11.910s 00:37:10.338 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:10.338 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:10.338 ************************************ 00:37:10.338 END TEST nvmf_fio_target 00:37:10.338 ************************************ 00:37:10.338 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:10.338 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:10.338 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:10.338 07:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:10.338 ************************************ 00:37:10.338 START TEST nvmf_bdevio 00:37:10.338 ************************************ 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:37:10.338 * Looking for test storage... 
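A note on the hotplug phase that just ended: fio-wrapper was still mid-read when the harness deleted the backing bdevs, so the four io_u "Operation not supported" errors and fio_status=4 are the pass condition ("nvmf hotplug test: fio failed as expected"), not a failure. A minimal bash sketch of that teardown pattern, using only commands visible in the trace above (the loop and variable names are illustrative, not the literal fio.sh source):

    # Delete each malloc bdev while fio is still reading the exported
    # namespaces; in-flight I/O then fails with EOPNOTSUPP, by design.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$RPC" bdev_malloc_delete "$bdev"
    done
    wait "$fio_pid" || fio_status=$?   # fio exits non-zero, as expected
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1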
00:37:10.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:10.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.338 --rc genhtml_branch_coverage=1 00:37:10.338 --rc genhtml_function_coverage=1 00:37:10.338 --rc genhtml_legend=1 00:37:10.338 --rc geninfo_all_blocks=1 00:37:10.338 --rc geninfo_unexecuted_blocks=1 00:37:10.338 00:37:10.338 ' 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:10.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.338 --rc genhtml_branch_coverage=1 00:37:10.338 --rc genhtml_function_coverage=1 00:37:10.338 --rc genhtml_legend=1 00:37:10.338 --rc geninfo_all_blocks=1 00:37:10.338 --rc geninfo_unexecuted_blocks=1 00:37:10.338 00:37:10.338 ' 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:10.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.338 --rc genhtml_branch_coverage=1 00:37:10.338 --rc genhtml_function_coverage=1 00:37:10.338 --rc genhtml_legend=1 00:37:10.338 --rc geninfo_all_blocks=1 00:37:10.338 --rc geninfo_unexecuted_blocks=1 00:37:10.338 00:37:10.338 ' 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:10.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.338 --rc genhtml_branch_coverage=1 00:37:10.338 --rc genhtml_function_coverage=1 00:37:10.338 --rc genhtml_legend=1 00:37:10.338 --rc geninfo_all_blocks=1 00:37:10.338 --rc geninfo_unexecuted_blocks=1 00:37:10.338 00:37:10.338 ' 00:37:10.338 07:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.338 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.339 07:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:37:10.339 07:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:18.481 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:18.481 07:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:18.481 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.481 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:18.481 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:18.482 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:18.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:18.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:37:18.482 00:37:18.482 --- 10.0.0.2 ping statistics --- 00:37:18.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.482 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:18.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:18.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:37:18.482 00:37:18.482 --- 10.0.0.1 ping statistics --- 00:37:18.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.482 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:18.482 07:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1740157 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1740157 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1740157 ']' 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:18.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:18.482 07:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:18.482 [2024-11-26 07:46:45.811852] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:18.482 [2024-11-26 07:46:45.812967] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:37:18.482 [2024-11-26 07:46:45.813018] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:18.482 [2024-11-26 07:46:45.911484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:18.482 [2024-11-26 07:46:45.964122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:18.482 [2024-11-26 07:46:45.964183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:18.482 [2024-11-26 07:46:45.964192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:18.482 [2024-11-26 07:46:45.964199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:18.482 [2024-11-26 07:46:45.964206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:18.482 [2024-11-26 07:46:45.966236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:18.482 [2024-11-26 07:46:45.966457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:18.482 [2024-11-26 07:46:45.966616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:18.482 [2024-11-26 07:46:45.966616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:18.482 [2024-11-26 07:46:46.043314] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
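Note: the plumbing traced above gives the bdevio run a back-to-back NVMe/TCP topology on a single host: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, while its peer (cvl_0_1) stays in the root namespace as the initiator. A minimal sketch of that setup, using the interface names and 10.0.0.0/24 addressing from this run:

    # Target side gets its own netns so target and initiator do not share a stack.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator answers on 10.0.0.1; target on 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port toward the initiator; the comment tag lets cleanup
    # find and drop exactly this rule later (the iptables-save | grep -v SPDK_NVMF
    # | iptables-restore pass at the end of the test).
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1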
00:37:18.482 [2024-11-26 07:46:46.044777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:18.482 [2024-11-26 07:46:46.044794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:18.482 [2024-11-26 07:46:46.045293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:18.482 [2024-11-26 07:46:46.045353] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:18.743 [2024-11-26 07:46:46.667584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:18.743 Malloc0 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.743 07:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:18.743 [2024-11-26 07:46:46.759864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:37:18.743 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:37:18.744 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:37:18.744 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:18.744 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:18.744 { 00:37:18.744 "params": { 00:37:18.744 "name": "Nvme$subsystem", 00:37:18.744 "trtype": "$TEST_TRANSPORT", 00:37:18.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:18.744 "adrfam": "ipv4", 00:37:18.744 "trsvcid": "$NVMF_PORT", 00:37:18.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:18.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:18.744 "hdgst": ${hdgst:-false}, 00:37:18.744 "ddgst": ${ddgst:-false} 00:37:18.744 }, 00:37:18.744 "method": "bdev_nvme_attach_controller" 00:37:18.744 } 00:37:18.744 EOF 00:37:18.744 )") 00:37:18.744 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:37:18.744 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:37:18.744 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:37:18.744 07:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:18.744 "params": { 00:37:18.744 "name": "Nvme1", 00:37:18.744 "trtype": "tcp", 00:37:18.744 "traddr": "10.0.0.2", 00:37:18.744 "adrfam": "ipv4", 00:37:18.744 "trsvcid": "4420", 00:37:18.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:18.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:18.744 "hdgst": false, 00:37:18.744 "ddgst": false 00:37:18.744 }, 00:37:18.744 "method": "bdev_nvme_attach_controller" 00:37:18.744 }' 00:37:18.744 [2024-11-26 07:46:46.823741] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
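Note: before bdevio itself comes up (its EAL banner resumes below), the target side was provisioned entirely over the RPC socket. The rpc_cmd calls in the trace map onto plain rpc.py invocations; a sketch of the equivalent sequence with the literal arguments from this run (the in-tree rpc.py path is an assumption):

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'

    # TCP transport; -o and -u 8192 are passed through verbatim from the
    # NVMF_TRANSPORT_OPTS built earlier in nvmf/common.sh.
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB RAM-backed bdev with 512-byte blocks to serve as the namespace.
    $RPC bdev_malloc_create 64 512 -b Malloc0

    # Subsystem cnode1: -a allows any host NQN, -s sets the serial number.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420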
00:37:18.744 [2024-11-26 07:46:46.823796] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740508 ] 00:37:19.005 [2024-11-26 07:46:46.912597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:19.005 [2024-11-26 07:46:46.952107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:19.005 [2024-11-26 07:46:46.952258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:19.005 [2024-11-26 07:46:46.952403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:19.265 I/O targets: 00:37:19.265 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:37:19.265 00:37:19.265 00:37:19.265 CUnit - A unit testing framework for C - Version 2.1-3 00:37:19.265 http://cunit.sourceforge.net/ 00:37:19.265 00:37:19.265 00:37:19.265 Suite: bdevio tests on: Nvme1n1 00:37:19.265 Test: blockdev write read block ...passed 00:37:19.265 Test: blockdev write zeroes read block ...passed 00:37:19.265 Test: blockdev write zeroes read no split ...passed 00:37:19.265 Test: blockdev write zeroes read split ...passed 00:37:19.265 Test: blockdev write zeroes read split partial ...passed 00:37:19.265 Test: blockdev reset ...[2024-11-26 07:46:47.295707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:37:19.265 [2024-11-26 07:46:47.295781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2039970 (9): Bad file descriptor 00:37:19.525 [2024-11-26 07:46:47.391712] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
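Note: bdevio runs as the initiator and takes no RPCs; its one controller comes from the --json config piped in on /dev/fd/62 above. gen_nvmf_target_json emitted the attach fragment printed in the trace (the surrounding bdev-subsystem wrapper is elided here), which makes bdevio expose the remote namespace as bdev Nvme1n1 for the CUnit suite traced here:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }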
00:37:19.525 passed 00:37:19.525 Test: blockdev write read 8 blocks ...passed 00:37:19.525 Test: blockdev write read size > 128k ...passed 00:37:19.525 Test: blockdev write read invalid size ...passed 00:37:19.525 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:19.525 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:19.525 Test: blockdev write read max offset ...passed 00:37:19.525 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:19.525 Test: blockdev writev readv 8 blocks ...passed 00:37:19.525 Test: blockdev writev readv 30 x 1block ...passed 00:37:19.525 Test: blockdev writev readv block ...passed 00:37:19.525 Test: blockdev writev readv size > 128k ...passed 00:37:19.525 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:19.525 Test: blockdev comparev and writev ...[2024-11-26 07:46:47.616836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:19.525 [2024-11-26 07:46:47.616868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:19.525 [2024-11-26 07:46:47.616884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:19.525 [2024-11-26 07:46:47.616893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:19.526 [2024-11-26 07:46:47.617459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:19.526 [2024-11-26 07:46:47.617471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:37:19.526 [2024-11-26 07:46:47.617486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:19.526 [2024-11-26 07:46:47.617494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:37:19.785 [2024-11-26 07:46:47.618040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:19.785 [2024-11-26 07:46:47.618052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:19.786 [2024-11-26 07:46:47.618071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:19.786 [2024-11-26 07:46:47.618079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:19.786 [2024-11-26 07:46:47.618612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:19.786 [2024-11-26 07:46:47.618624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:19.786 [2024-11-26 07:46:47.618637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:37:19.786 [2024-11-26 07:46:47.618645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:19.786 passed 00:37:19.786 Test: blockdev nvme passthru rw ...passed 00:37:19.786 Test: blockdev nvme passthru vendor specific ...[2024-11-26 07:46:47.702966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:19.786 [2024-11-26 07:46:47.702981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:19.786 [2024-11-26 07:46:47.703330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:19.786 [2024-11-26 07:46:47.703341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:19.786 [2024-11-26 07:46:47.703712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:19.786 [2024-11-26 07:46:47.703722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:19.786 [2024-11-26 07:46:47.704085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:37:19.786 [2024-11-26 07:46:47.704095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:19.786 passed 00:37:19.786 Test: blockdev nvme admin passthru ...passed 00:37:19.786 Test: blockdev copy ...passed 00:37:19.786 00:37:19.786 Run Summary: Type Total Ran Passed Failed Inactive 00:37:19.786 suites 1 1 n/a 0 0 00:37:19.786 tests 23 23 23 0 0 00:37:19.786 asserts 152 152 152 0 n/a 00:37:19.786 00:37:19.786 Elapsed time = 1.264 seconds 00:37:19.786 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:19.786 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.786 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:19.786 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.786 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:37:19.786 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:37:19.786 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:19.786 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:20.046 rmmod nvme_tcp 00:37:20.046 rmmod nvme_fabrics 00:37:20.046 rmmod nvme_keyring 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
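Note: teardown is driven by the trap installed right after nvmfappstart: delete the subsystem over RPC, unload the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), and kill the target by pid after a liveness check. A condensed sketch of that path, using the pid from this run and the same rpc.py helper as in the provisioning sketch:

    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    sync
    modprobe -v -r nvme-tcp        # drags out nvme-fabrics and nvme-keyring too;
    modprobe -v -r nvme-fabrics    # common.sh retries this block up to 20 times

    # killprocess(): only signal a pid that is still alive and is not sudo.
    # For an SPDK app the comm name is reactor_N -- reactor_3 in this run.
    pid=1740157
    if kill -0 "$pid" 2>/dev/null && [[ $(ps --no-headers -o comm= "$pid") != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi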
00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1740157 ']' 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1740157 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1740157 ']' 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1740157 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:20.046 07:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740157 00:37:20.046 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:37:20.046 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:37:20.046 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740157' 00:37:20.046 killing process with pid 1740157 00:37:20.046 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1740157 00:37:20.046 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1740157 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:20.307 07:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.220 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.220 00:37:22.220 real 0m12.262s 00:37:22.220 user 
0m9.410s 00:37:22.220 sys 0m6.627s 00:37:22.220 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:22.220 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:37:22.220 ************************************ 00:37:22.220 END TEST nvmf_bdevio 00:37:22.220 ************************************ 00:37:22.220 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:37:22.220 00:37:22.220 real 4m59.603s 00:37:22.220 user 10m17.008s 00:37:22.220 sys 2m5.557s 00:37:22.220 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:22.220 07:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:22.220 ************************************ 00:37:22.220 END TEST nvmf_target_core_interrupt_mode 00:37:22.220 ************************************ 00:37:22.480 07:46:50 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:22.480 07:46:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:22.480 07:46:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:22.480 07:46:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:22.480 ************************************ 00:37:22.480 START TEST nvmf_interrupt 00:37:22.480 ************************************ 00:37:22.480 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:37:22.480 * Looking for test storage... 
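Note: the interrupt suite begins by re-sourcing the shared helpers, and the first block traced below is scripts/common.sh deciding whether the installed lcov predates 2.0 (lt 1.15 2, which selects the old-style --rc lcov_*_coverage flags for LCOV_OPTS). cmp_versions splits each version string on '.', '-' and ':' and compares the fields numerically, left to right. A simplified equivalent of the logic the trace steps through (not the verbatim helper):

    decimal() {                        # keep purely numeric fields, else 0
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    cmp_versions() {                   # e.g. cmp_versions 1.15 '<' 2
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && { [[ $2 == '>' ]]; return; }   # decided: greater
            (( ver1[v] < ver2[v] )) && { [[ $2 == '<' ]]; return; }   # decided: less
        done
        [[ $2 == '==' || $2 == '<=' || $2 == '>=' ]]                  # all fields equal
    }

    lt() { cmp_versions "$1" '<' "$2"; }   # here: field 0 gives 1 < 2, so lt succeeds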
00:37:22.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:22.480 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:22.480 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:37:22.480 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.742 --rc genhtml_branch_coverage=1 00:37:22.742 --rc genhtml_function_coverage=1 00:37:22.742 --rc genhtml_legend=1 00:37:22.742 --rc geninfo_all_blocks=1 00:37:22.742 --rc geninfo_unexecuted_blocks=1 00:37:22.742 00:37:22.742 ' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.742 --rc genhtml_branch_coverage=1 00:37:22.742 --rc genhtml_function_coverage=1 00:37:22.742 --rc genhtml_legend=1 00:37:22.742 --rc geninfo_all_blocks=1 00:37:22.742 --rc geninfo_unexecuted_blocks=1 00:37:22.742 00:37:22.742 ' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.742 --rc genhtml_branch_coverage=1 00:37:22.742 --rc genhtml_function_coverage=1 00:37:22.742 --rc genhtml_legend=1 00:37:22.742 --rc geninfo_all_blocks=1 00:37:22.742 --rc geninfo_unexecuted_blocks=1 00:37:22.742 00:37:22.742 ' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:22.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:22.742 --rc genhtml_branch_coverage=1 00:37:22.742 --rc genhtml_function_coverage=1 00:37:22.742 --rc genhtml_legend=1 00:37:22.742 --rc geninfo_all_blocks=1 00:37:22.742 --rc geninfo_unexecuted_blocks=1 00:37:22.742 00:37:22.742 ' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:22.742 07:46:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:37:22.743 07:46:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:30.885 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:30.885 07:46:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:30.885 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:30.885 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:30.885 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:30.885 07:46:57 
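Note: device discovery then repeats for the interrupt suite. nvmf/common.sh buckets supported NICs by PCI vendor/device ID (0x8086:0x159b, matched twice here, lands in the e810 list) and resolves each PCI function to its kernel netdev through sysfs. A sketch of that resolution, assuming the pci_bus_cache address table that common.sh populates elsewhere:

    # Classification: the E810 family is Intel 0x1592/0x159b.
    e810+=(${pci_bus_cache["0x8086:0x1592"]})
    e810+=(${pci_bus_cache["0x8086:0x159b"]})
    pci_devs=("${e810[@]}")

    for pci in "${pci_devs[@]}"; do
        # Every netdev bound to this PCI function appears as a directory here.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")      # strip path, keep iface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done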
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:30.885 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:30.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:30.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:37:30.886 00:37:30.886 --- 10.0.0.2 ping statistics --- 00:37:30.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.886 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:30.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:30.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:37:30.886 00:37:30.886 --- 10.0.0.1 ping statistics --- 00:37:30.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:30.886 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1744860 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1744860 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1744860 ']' 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:30.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.886 07:46:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:30.886 [2024-11-26 07:46:57.997313] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:30.886 [2024-11-26 07:46:57.998323] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:37:30.886 [2024-11-26 07:46:57.998363] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:30.886 [2024-11-26 07:46:58.094462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:30.886 [2024-11-26 07:46:58.143743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
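Note: the interrupt-mode target is then started exactly as for bdevio but on a two-core mask, so there is one reactor per core 0/1 and every poll group runs in interrupt mode. waitforlisten blocks the test until the app answers on its RPC socket. A condensed sketch with the literal command from this run; the readiness probe shown is an assumption (the real helper's check is more involved):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!

    # waitforlisten: up to 100 retries against the default RPC socket.
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 100; i > 0; i--)); do
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
    (( i > 0 )) || { echo 'nvmf_tgt never came up'; exit 1; }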
00:37:30.886 [2024-11-26 07:46:58.143793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:30.886 [2024-11-26 07:46:58.143802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:30.886 [2024-11-26 07:46:58.143809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:30.886 [2024-11-26 07:46:58.143816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:30.886 [2024-11-26 07:46:58.145427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.886 [2024-11-26 07:46:58.145524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.886 [2024-11-26 07:46:58.221671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:30.886 [2024-11-26 07:46:58.222245] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:30.886 [2024-11-26 07:46:58.222562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:37:30.886 5000+0 records in 00:37:30.886 5000+0 records out 00:37:30.886 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0186367 s, 549 MB/s 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:30.886 AIO0 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:30.886 [2024-11-26 07:46:58.914473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.886 07:46:58 
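Note: this suite backs the namespace with an AIO bdev over a plain file instead of a malloc disk: setup_bdev_aio writes a 10240000-byte file with dd (the 5000+0 records above) and registers it with a 2048-byte block size; the trace that follows exports it as the namespace of cnode1. A sketch, reusing the rpc.py helper from the earlier provisioning sketch:

    aiofile=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000   # 10 MB backing file, as logged

    # Register the file as bdev AIO0 with a 2048-byte logical block size.
    $RPC bdev_aio_create "$aiofile" AIO0 2048

    # Then, per the trace: transport with a 256-deep queue, subsystem, ns, listener.
    $RPC nvmf_create_transport -t tcp -o -u 8192 -q 256
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420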
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:30.886 [2024-11-26 07:46:58.954976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1744860 0 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1744860 0 idle 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1744860 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:30.886 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:30.887 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1744860 -w 256 00:37:30.887 07:46:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1744860 root 20 0 128.2g 41472 31104 S 0.0 0.0 0:00.30 reactor_0' 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1744860 root 20 0 128.2g 41472 31104 S 0.0 0.0 0:00.30 reactor_0 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1744860 1 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1744860 1 idle 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1744860 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1744860 -w 256 00:37:31.148 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1744864 root 20 0 128.2g 41472 31104 S 0.0 0.0 0:00.00 reactor_1' 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1744864 root 20 0 128.2g 41472 31104 S 0.0 0.0 0:00.00 reactor_1 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1745177 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1744860 0 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1744860 0 busy 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1744860 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1744860 -w 256 00:37:31.409 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1744860 root 20 0 128.2g 42624 31104 R 99.9 0.0 0:00.48 reactor_0' 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1744860 root 20 0 128.2g 42624 31104 R 99.9 0.0 0:00.48 reactor_0 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1744860 1 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1744860 1 busy 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1744860 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1744860 -w 256 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1744864 root 20 0 128.2g 42624 31104 R 93.8 0.0 0:00.27 reactor_1' 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1744864 root 20 0 128.2g 42624 31104 R 93.8 0.0 0:00.27 reactor_1 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:31.669 07:46:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1745177 00:37:41.672 Initializing NVMe Controllers 00:37:41.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:41.672 Controller IO queue size 256, less than required. 00:37:41.672 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:41.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:41.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:41.672 Initialization complete. Launching workers. 
00:37:41.672 ======================================================== 00:37:41.672 Latency(us) 00:37:41.672 Device Information : IOPS MiB/s Average min max 00:37:41.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19515.30 76.23 13121.94 4281.41 30713.43 00:37:41.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20576.70 80.38 12443.40 7499.03 51956.29 00:37:41.672 ======================================================== 00:37:41.672 Total : 40092.00 156.61 12773.69 4281.41 51956.29 00:37:41.672 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1744860 0 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1744860 0 idle 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1744860 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1744860 -w 256 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1744860 root 20 0 128.2g 42624 31104 S 6.7 0.0 0:20.30 reactor_0' 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1744860 root 20 0 128.2g 42624 31104 S 6.7 0.0 0:20.30 reactor_0 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1744860 1 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1744860 1 idle 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1744860 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1744860 -w 256 00:37:41.672 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1744864 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:10.00 reactor_1' 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1744864 root 20 0 128.2g 42624 31104 S 0.0 0.0 0:10.00 reactor_1 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:41.933 07:47:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:42.874 07:47:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:37:42.874 07:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:37:42.874 07:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:42.874 07:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:42.874 07:47:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1744860 0 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1744860 0 idle 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1744860 00:37:44.788 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1744860 -w 256 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1744860 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:20.68 reactor_0' 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1744860 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:20.68 reactor_0 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1744860 1 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1744860 1 idle 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1744860 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1744860 -w 256 00:37:44.789 07:47:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1744864 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:10.16 reactor_1' 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1744864 root 20 0 128.2g 77184 31104 S 0.0 0.1 0:10.16 reactor_1 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:37:45.048 07:47:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:45.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:45.308 rmmod nvme_tcp 00:37:45.308 rmmod nvme_fabrics 00:37:45.308 rmmod nvme_keyring 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1744860 ']' 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1744860 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1744860 ']' 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1744860 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744860 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744860' 00:37:45.308 killing process with pid 1744860 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1744860 00:37:45.308 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1744860 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:45.568 07:47:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.115 07:47:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:48.115 00:37:48.115 real 0m25.208s 00:37:48.115 user 0m40.616s 00:37:48.115 sys 0m9.234s 00:37:48.115 07:47:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.115 07:47:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:48.115 ************************************ 00:37:48.115 END TEST nvmf_interrupt 00:37:48.115 ************************************ 00:37:48.115 00:37:48.115 real 30m8.352s 00:37:48.115 user 61m27.246s 00:37:48.115 sys 10m18.103s 00:37:48.115 07:47:15 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.115 07:47:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:48.115 ************************************ 00:37:48.115 END TEST nvmf_tcp 00:37:48.115 ************************************ 00:37:48.115 07:47:15 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:37:48.115 07:47:15 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:48.115 07:47:15 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:48.115 07:47:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:48.115 07:47:15 -- common/autotest_common.sh@10 -- # set +x 00:37:48.115 ************************************ 00:37:48.115 START TEST spdkcli_nvmf_tcp 00:37:48.115 ************************************ 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:48.115 * Looking for test storage... 00:37:48.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:48.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.115 --rc genhtml_branch_coverage=1 00:37:48.115 --rc genhtml_function_coverage=1 00:37:48.115 --rc genhtml_legend=1 00:37:48.115 --rc geninfo_all_blocks=1 00:37:48.115 --rc geninfo_unexecuted_blocks=1 00:37:48.115 00:37:48.115 ' 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:48.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.115 --rc genhtml_branch_coverage=1 00:37:48.115 --rc genhtml_function_coverage=1 00:37:48.115 --rc genhtml_legend=1 00:37:48.115 --rc geninfo_all_blocks=1 00:37:48.115 --rc geninfo_unexecuted_blocks=1 00:37:48.115 00:37:48.115 ' 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:48.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.115 --rc genhtml_branch_coverage=1 00:37:48.115 --rc genhtml_function_coverage=1 00:37:48.115 --rc genhtml_legend=1 00:37:48.115 --rc geninfo_all_blocks=1 00:37:48.115 --rc geninfo_unexecuted_blocks=1 00:37:48.115 00:37:48.115 ' 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:48.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.115 --rc genhtml_branch_coverage=1 00:37:48.115 --rc genhtml_function_coverage=1 00:37:48.115 --rc genhtml_legend=1 00:37:48.115 --rc geninfo_all_blocks=1 00:37:48.115 --rc geninfo_unexecuted_blocks=1 00:37:48.115 00:37:48.115 ' 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:48.115 
07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.115 07:47:15 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:48.116 07:47:15 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:48.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1748415 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1748415 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1748415 ']' 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:48.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:48.116 07:47:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:48.116 [2024-11-26 07:47:16.005209] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:37:48.116 [2024-11-26 07:47:16.005263] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1748415 ] 00:37:48.116 [2024-11-26 07:47:16.093771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:48.116 [2024-11-26 07:47:16.131936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:48.116 [2024-11-26 07:47:16.131939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:49.056 07:47:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:49.056 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:49.056 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:49.056 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:49.056 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:49.056 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:49.056 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:49.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:49.056 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:49.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:49.057 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:49.057 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:49.057 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:49.057 ' 00:37:51.600 [2024-11-26 07:47:19.556262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:52.982 [2024-11-26 07:47:20.916530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:55.527 [2024-11-26 07:47:23.443587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:58.077 [2024-11-26 07:47:25.649804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:59.462 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:59.462 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:59.462 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:59.462 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:59.462 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:59.462 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:59.462 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:59.462 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:59.462 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:59.462 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:59.462 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:59.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:59.462 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:59.462 07:47:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:59.462 07:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.462 07:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:59.462 07:47:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:59.462 07:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:59.462 07:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:59.462 07:47:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:59.462 07:47:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:38:00.033 07:47:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:00.033 07:47:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:00.033 07:47:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:00.033 07:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:00.033 07:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:00.033 
07:47:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:00.033 07:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:00.033 07:47:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:00.033 07:47:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:00.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:00.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:00.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:00.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:00.033 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:00.033 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:00.033 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:00.033 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:00.033 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:00.033 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:00.033 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:00.033 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:00.033 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:00.033 ' 00:38:06.615 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:06.615 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:06.615 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:06.615 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:06.615 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:06.615 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:06.615 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:06.615 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:06.615 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:06.615 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:06.615 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:06.615 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:06.615 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:06.615 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:06.615 
07:47:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1748415 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1748415 ']' 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1748415 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1748415 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1748415' 00:38:06.615 killing process with pid 1748415 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1748415 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1748415 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1748415 ']' 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1748415 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1748415 ']' 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1748415 00:38:06.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1748415) - No such process 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1748415 is not found' 00:38:06.615 Process with pid 1748415 is not found 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:06.615 00:38:06.615 real 0m18.127s 00:38:06.615 user 0m40.325s 00:38:06.615 sys 0m0.834s 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.615 07:47:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:06.615 ************************************ 00:38:06.615 END TEST spdkcli_nvmf_tcp 00:38:06.615 ************************************ 00:38:06.615 07:47:33 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:06.615 07:47:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:06.615 07:47:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.615 07:47:33 -- common/autotest_common.sh@10 -- # set +x 00:38:06.615 ************************************ 00:38:06.615 START TEST nvmf_identify_passthru 00:38:06.615 ************************************ 00:38:06.615 07:47:33 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:06.615 * Looking for test 
storage... 00:38:06.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:06.615 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:06.615 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:38:06.615 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:06.615 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:06.615 07:47:34 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:38:06.615 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:06.615 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:06.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.615 --rc genhtml_branch_coverage=1 00:38:06.615 --rc genhtml_function_coverage=1 00:38:06.615 --rc genhtml_legend=1 00:38:06.615 --rc geninfo_all_blocks=1 00:38:06.615 --rc geninfo_unexecuted_blocks=1 00:38:06.616 00:38:06.616 ' 00:38:06.616 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:06.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.616 --rc genhtml_branch_coverage=1 00:38:06.616 --rc genhtml_function_coverage=1 00:38:06.616 --rc genhtml_legend=1 00:38:06.616 --rc geninfo_all_blocks=1 00:38:06.616 --rc geninfo_unexecuted_blocks=1 00:38:06.616 00:38:06.616 ' 00:38:06.616 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:06.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.616 --rc genhtml_branch_coverage=1 00:38:06.616 --rc genhtml_function_coverage=1 00:38:06.616 --rc genhtml_legend=1 00:38:06.616 --rc geninfo_all_blocks=1 00:38:06.616 --rc geninfo_unexecuted_blocks=1 00:38:06.616 00:38:06.616 ' 00:38:06.616 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:06.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.616 --rc genhtml_branch_coverage=1 00:38:06.616 --rc genhtml_function_coverage=1 00:38:06.616 --rc genhtml_legend=1 00:38:06.616 --rc geninfo_all_blocks=1 00:38:06.616 --rc geninfo_unexecuted_blocks=1 00:38:06.616 00:38:06.616 ' 00:38:06.616 07:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.616 07:47:34 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.616 07:47:34 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.616 07:47:34 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.616 07:47:34 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:06.616 07:47:34 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.616 07:47:34 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.616 07:47:34 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.616 07:47:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:06.616 07:47:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:06.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:06.616 07:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.616 07:47:34 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.616 07:47:34 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.616 07:47:34 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.616 07:47:34 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:06.616 07:47:34 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.616 07:47:34 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.616 07:47:34 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.616 07:47:34 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:06.616 07:47:34 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.616 07:47:34 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.616 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:06.616 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:06.616 07:47:34 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:38:06.616 07:47:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:38:13.204 07:47:41 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:13.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:13.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:38:13.204 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:13.205 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:13.205 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:13.205 07:47:41 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:13.205 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:13.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:13.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:38:13.467 00:38:13.467 --- 10.0.0.2 ping statistics --- 00:38:13.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.467 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:13.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:13.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:38:13.467 00:38:13.467 --- 10.0.0.1 ping statistics --- 00:38:13.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.467 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:13.467 07:47:41 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:13.467 07:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:38:13.467 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:13.467 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:13.467 07:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:38:13.467 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:38:13.467 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:38:13.467 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:38:13.467 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:38:13.467 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:38:13.467 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:38:13.467 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:38:13.467 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:13.468 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:38:13.468 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:38:13.468 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:38:13.468 07:47:41 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:38:13.468 07:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:38:13.468 07:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:38:13.468 07:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:13.468 07:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:38:13.468 07:47:41 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:38:14.039 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:38:14.039 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:38:14.039 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:38:14.039 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:38:14.612 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:38:14.612 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:38:14.612 07:47:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:14.612 07:47:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.612 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:38:14.612 07:47:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:14.612 07:47:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.612 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1755764 00:38:14.612 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:14.612 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:38:14.612 07:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1755764 00:38:14.612 07:47:42 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1755764 ']' 00:38:14.612 07:47:42 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.612 07:47:42 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:14.612 07:47:42 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:14.613 07:47:42 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:14.613 07:47:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:14.613 [2024-11-26 07:47:42.637003] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:38:14.613 [2024-11-26 07:47:42.637056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:14.874 [2024-11-26 07:47:42.728569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:14.874 [2024-11-26 07:47:42.766099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:14.874 [2024-11-26 07:47:42.766133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:38:14.874 [2024-11-26 07:47:42.766141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:14.874 [2024-11-26 07:47:42.766147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:14.874 [2024-11-26 07:47:42.766153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:14.874 [2024-11-26 07:47:42.767692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:14.874 [2024-11-26 07:47:42.767840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:14.874 [2024-11-26 07:47:42.767991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.874 [2024-11-26 07:47:42.767992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:15.446 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:15.446 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:38:15.446 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:38:15.446 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.446 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:15.446 INFO: Log level set to 20 00:38:15.446 INFO: Requests: 00:38:15.446 { 00:38:15.446 "jsonrpc": "2.0", 00:38:15.446 "method": "nvmf_set_config", 00:38:15.447 "id": 1, 00:38:15.447 "params": { 00:38:15.447 "admin_cmd_passthru": { 00:38:15.447 "identify_ctrlr": true 00:38:15.447 } 00:38:15.447 } 00:38:15.447 } 00:38:15.447 00:38:15.447 INFO: response: 00:38:15.447 { 00:38:15.447 "jsonrpc": "2.0", 00:38:15.447 "id": 1, 00:38:15.447 "result": true 00:38:15.447 } 00:38:15.447 00:38:15.447 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.447 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:38:15.447 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.447 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:15.447 INFO: Setting log level to 20 00:38:15.447 INFO: Setting log level to 20 00:38:15.447 INFO: Log level set to 20 00:38:15.447 INFO: Log level set to 20 00:38:15.447 INFO: Requests: 00:38:15.447 { 00:38:15.447 "jsonrpc": "2.0", 00:38:15.447 "method": "framework_start_init", 00:38:15.447 "id": 1 00:38:15.447 } 00:38:15.447 00:38:15.447 INFO: Requests: 00:38:15.447 { 00:38:15.447 "jsonrpc": "2.0", 00:38:15.447 "method": "framework_start_init", 00:38:15.447 "id": 1 00:38:15.447 } 00:38:15.447 00:38:15.708 [2024-11-26 07:47:43.546063] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:38:15.708 INFO: response: 00:38:15.708 { 00:38:15.708 "jsonrpc": "2.0", 00:38:15.708 "id": 1, 00:38:15.708 "result": true 00:38:15.708 } 00:38:15.708 00:38:15.708 INFO: response: 00:38:15.708 { 00:38:15.708 "jsonrpc": "2.0", 00:38:15.708 "id": 1, 00:38:15.708 "result": true 00:38:15.708 } 00:38:15.708 00:38:15.708 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.708 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:15.708 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.708 07:47:43 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:38:15.708 INFO: Setting log level to 40 00:38:15.708 INFO: Setting log level to 40 00:38:15.708 INFO: Setting log level to 40 00:38:15.708 [2024-11-26 07:47:43.559619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:15.708 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:15.708 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:38:15.708 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:15.708 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:15.708 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:38:15.708 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:15.708 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:16.033 Nvme0n1 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.033 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.033 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.033 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:16.033 [2024-11-26 07:47:43.961224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.033 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:16.033 [ 00:38:16.033 { 00:38:16.033 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:16.033 "subtype": "Discovery", 00:38:16.033 "listen_addresses": [], 00:38:16.033 "allow_any_host": true, 00:38:16.033 "hosts": [] 00:38:16.033 }, 00:38:16.033 { 00:38:16.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:16.033 "subtype": "NVMe", 00:38:16.033 "listen_addresses": [ 00:38:16.033 { 00:38:16.033 "trtype": "TCP", 00:38:16.033 "adrfam": "IPv4", 00:38:16.033 "traddr": "10.0.0.2", 00:38:16.033 "trsvcid": "4420" 00:38:16.033 } 00:38:16.033 ], 00:38:16.033 "allow_any_host": true, 00:38:16.033 "hosts": [], 00:38:16.033 "serial_number": 
"SPDK00000000000001", 00:38:16.033 "model_number": "SPDK bdev Controller", 00:38:16.033 "max_namespaces": 1, 00:38:16.033 "min_cntlid": 1, 00:38:16.033 "max_cntlid": 65519, 00:38:16.033 "namespaces": [ 00:38:16.033 { 00:38:16.033 "nsid": 1, 00:38:16.033 "bdev_name": "Nvme0n1", 00:38:16.033 "name": "Nvme0n1", 00:38:16.033 "nguid": "36344730526054870025384500000044", 00:38:16.033 "uuid": "36344730-5260-5487-0025-384500000044" 00:38:16.033 } 00:38:16.033 ] 00:38:16.033 } 00:38:16.033 ] 00:38:16.033 07:47:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.033 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:16.033 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:16.033 07:47:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:16.327 07:47:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:38:16.327 07:47:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:16.327 07:47:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:16.327 07:47:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:16.327 07:47:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:38:16.327 07:47:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:38:16.327 07:47:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:38:16.327 07:47:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:16.598 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:16.598 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:16.598 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:16.598 07:47:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:16.598 07:47:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:16.598 rmmod nvme_tcp 00:38:16.598 rmmod nvme_fabrics 00:38:16.598 rmmod nvme_keyring 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1755764 ']' 00:38:16.598 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1755764 00:38:16.598 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1755764 ']' 00:38:16.599 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1755764 00:38:16.599 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:38:16.599 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:16.599 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1755764 00:38:16.599 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:16.599 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:16.599 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1755764' 00:38:16.599 killing process with pid 1755764 00:38:16.599 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1755764 00:38:16.599 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1755764 00:38:16.859 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:16.859 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:16.859 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:16.859 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:38:16.859 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:38:16.859 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:16.859 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:38:16.859 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:16.859 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:16.859 07:47:44 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:16.859 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:16.859 07:47:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.402 07:47:46 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:19.402 00:38:19.402 real 0m12.972s 00:38:19.402 user 0m10.384s 00:38:19.402 sys 0m6.555s 00:38:19.402 07:47:46 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:19.402 07:47:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:19.402 ************************************ 00:38:19.402 END TEST nvmf_identify_passthru 00:38:19.402 ************************************ 00:38:19.402 07:47:46 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:19.402 07:47:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:19.402 07:47:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:19.402 07:47:46 -- common/autotest_common.sh@10 -- # set +x 00:38:19.402 ************************************ 00:38:19.402 START TEST nvmf_dif 00:38:19.402 ************************************ 00:38:19.402 07:47:46 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:19.402 * Looking for test storage... 
00:38:19.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:19.402 07:47:47 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:19.402 07:47:47 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:38:19.402 07:47:47 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:19.402 07:47:47 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:38:19.402 07:47:47 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:19.403 07:47:47 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:19.403 07:47:47 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:38:19.403 07:47:47 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:19.403 07:47:47 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:19.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.403 --rc genhtml_branch_coverage=1 00:38:19.403 --rc genhtml_function_coverage=1 00:38:19.403 --rc genhtml_legend=1 00:38:19.403 --rc geninfo_all_blocks=1 00:38:19.403 --rc geninfo_unexecuted_blocks=1 00:38:19.403 00:38:19.403 ' 00:38:19.403 07:47:47 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:19.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:19.403 --rc genhtml_branch_coverage=1 00:38:19.403 --rc genhtml_function_coverage=1 00:38:19.403 --rc genhtml_legend=1 00:38:19.403 --rc geninfo_all_blocks=1 00:38:19.403 --rc geninfo_unexecuted_blocks=1 00:38:19.403 00:38:19.403 ' 00:38:19.403 07:47:47 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov
00:38:19.403   --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:19.403   --rc genhtml_branch_coverage=1
00:38:19.403   --rc genhtml_function_coverage=1
00:38:19.403   --rc genhtml_legend=1
00:38:19.403   --rc geninfo_all_blocks=1
00:38:19.403   --rc geninfo_unexecuted_blocks=1
00:38:19.403
00:38:19.403   '
00:38:19.403  07:47:47 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:38:19.403   --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:19.403   --rc genhtml_branch_coverage=1
00:38:19.403   --rc genhtml_function_coverage=1
00:38:19.403   --rc genhtml_legend=1
00:38:19.403   --rc geninfo_all_blocks=1
00:38:19.403   --rc geninfo_unexecuted_blocks=1
00:38:19.403
00:38:19.403   '
00:38:19.403  07:47:47 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:19.403  07:47:47 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob
00:38:19.403  07:47:47 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:38:19.403  07:47:47 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:19.403  07:47:47 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:19.403  07:47:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:19.403  07:47:47 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:19.403  07:47:47 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:19.403  07:47:47 nvmf_dif -- paths/export.sh@5 -- # export PATH
00:38:19.403  07:47:47 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@51 -- # : 0
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:38:19.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0
00:38:19.403  07:47:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16
00:38:19.403  07:47:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:38:19.403  07:47:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64
00:38:19.403  07:47:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1
00:38:19.403  07:47:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:19.403  07:47:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:38:19.403  07:47:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:38:19.403  07:47:47 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable
00:38:19.403  07:47:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=()
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=()
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=()
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=()
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@320 -- # e810=()
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@321 -- # x722=()
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@322 -- # mlx=()
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:38:27.543 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:38:27.543 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:38:27.543 Found net devices under 0000:4b:00.0: cvl_0_0
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:38:27.543 Found net devices under 0000:4b:00.1: cvl_0_1
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:38:27.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:38:27.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms
00:38:27.543
00:38:27.543 --- 10.0.0.2 ping statistics ---
00:38:27.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:27.543 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms
00:38:27.543  07:47:54 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:38:27.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:27.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms
00:38:27.543
00:38:27.543 --- 10.0.0.1 ping statistics ---
00:38:27.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:27.544 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms
00:38:27.544  07:47:54 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:38:27.544  07:47:54 nvmf_dif -- nvmf/common.sh@450 -- # return 0
00:38:27.544  07:47:54 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:38:27.544  07:47:54 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:38:30.091 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:38:30.092 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:38:30.092 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:38:30.092  07:47:58 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:38:30.092  07:47:58 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:38:30.092  07:47:58 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:38:30.092  07:47:58 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:38:30.092  07:47:58 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:38:30.092  07:47:58 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:38:30.092  07:47:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:38:30.092  07:47:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:38:30.092  07:47:58 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:30.092  07:47:58 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:30.092  07:47:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:30.092  07:47:58 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1761702
00:38:30.092  07:47:58 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1761702
00:38:30.092  07:47:58 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:38:30.092  07:47:58 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1761702 ']'
00:38:30.092  07:47:58 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:30.092  07:47:58 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:30.092  07:47:58 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:30.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:30.092  07:47:58 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:30.092  07:47:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:30.353 [2024-11-26 07:47:58.231550] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization...
00:38:30.353 [2024-11-26 07:47:58.231602] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:30.353 [2024-11-26 07:47:58.327917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:30.353 [2024-11-26 07:47:58.379257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:30.353 [2024-11-26 07:47:58.379306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:30.353 [2024-11-26 07:47:58.379315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:30.353 [2024-11-26 07:47:58.379323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:30.353 [2024-11-26 07:47:58.379330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:30.353 [2024-11-26 07:47:58.380119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:38:30.927  07:47:59 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:30.927  07:47:59 nvmf_dif -- common/autotest_common.sh@868 -- # return 0
00:38:30.927  07:47:59 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:38:30.927  07:47:59 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:30.927  07:47:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:31.188  07:47:59 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:31.188  07:47:59 nvmf_dif -- target/dif.sh@139 -- # create_transport
00:38:31.188  07:47:59 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
00:38:31.188  07:47:59 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:31.188  07:47:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:31.188 [2024-11-26 07:47:59.058521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:31.188  07:47:59 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:31.188  07:47:59 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1
00:38:31.188  07:47:59 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:31.188  07:47:59 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:31.188  07:47:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:31.188 ************************************
00:38:31.188 START TEST fio_dif_1_default
00:38:31.188 ************************************
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@"
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:31.188 bdev_null0
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:31.188 [2024-11-26 07:47:59.142900] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=()
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:31.188 {
00:38:31.188   "params": {
00:38:31.188   "name": "Nvme$subsystem",
00:38:31.188   "trtype": "$TEST_TRANSPORT",
00:38:31.188   "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:31.188   "adrfam": "ipv4",
00:38:31.188   "trsvcid": "$NVMF_PORT",
00:38:31.188   "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:31.188   "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:31.188   "hdgst": ${hdgst:-false},
00:38:31.188   "ddgst": ${ddgst:-false}
00:38:31.188   },
00:38:31.188   "method": "bdev_nvme_attach_controller"
00:38:31.188 }
00:38:31.188 EOF
00:38:31.188 )")
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 ))
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files ))
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:31.188  07:47:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq .
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=,
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:31.189 "params": {
00:38:31.189 "name": "Nvme0",
00:38:31.189 "trtype": "tcp",
00:38:31.189 "traddr": "10.0.0.2",
00:38:31.189 "adrfam": "ipv4",
00:38:31.189 "trsvcid": "4420",
00:38:31.189 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:31.189 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:31.189 "hdgst": false,
00:38:31.189 "ddgst": false
00:38:31.189 },
00:38:31.189 "method": "bdev_nvme_attach_controller"
00:38:31.189 }'
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:31.189  07:47:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:31.783 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:38:31.783 fio-3.35
00:38:31.783 Starting 1 thread
00:38:44.022
00:38:44.022 filename0: (groupid=0, jobs=1): err= 0: pid=1762243: Tue Nov 26 07:48:10 2024
00:38:44.022   read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10031msec)
00:38:44.022     slat (nsec): min=5500, max=41081, avg=6314.85, stdev=1874.45
00:38:44.022     clat (usec): min=40844, max=43016, avg=41093.13, stdev=324.45
00:38:44.022      lat (usec): min=40849, max=43057, avg=41099.44, stdev=325.38
00:38:44.022     clat percentiles (usec):
00:38:44.022      | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:38:44.022      | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:38:44.022      | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206],
00:38:44.023      | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254],
00:38:44.023      | 99.99th=[43254]
00:38:44.023    bw (  KiB/s): min= 352, max= 416, per=99.69%, avg=388.80, stdev=15.66, samples=20
00:38:44.023    iops        : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20
00:38:44.023   lat (msec)   : 50=100.00%
00:38:44.023   cpu          : usr=93.35%, sys=6.41%, ctx=7, majf=0, minf=234
00:38:44.023   IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:44.023      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.023      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:44.023      issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:44.023      latency   : target=0, window=0, percentile=100.00%, depth=4
00:38:44.023
00:38:44.023 Run status group 0 (all jobs):
00:38:44.023    READ: bw=389KiB/s (399kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10031-10031msec
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:44.023
00:38:44.023 real    0m11.251s
00:38:44.023 user    0m27.308s
00:38:44.023 sys     0m1.034s
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:44.023 ************************************
00:38:44.023 END TEST fio_dif_1_default
00:38:44.023 ************************************
00:38:44.023  07:48:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:38:44.023  07:48:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:44.023  07:48:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:44.023 ************************************
00:38:44.023 START TEST fio_dif_1_multi_subsystems
00:38:44.023 ************************************
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:44.023 bdev_null0
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:44.023 [2024-11-26 07:48:10.477266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:44.023 bdev_null1
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=()
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:44.023  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:44.024 {
00:38:44.024   "params": {
00:38:44.024   "name": "Nvme$subsystem",
00:38:44.024   "trtype": "$TEST_TRANSPORT",
00:38:44.024   "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:44.024   "adrfam": "ipv4",
00:38:44.024   "trsvcid": "$NVMF_PORT",
00:38:44.024   "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:44.024   "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:44.024   "hdgst": ${hdgst:-false},
00:38:44.024   "ddgst": ${ddgst:-false}
00:38:44.024   },
00:38:44.024   "method": "bdev_nvme_attach_controller"
00:38:44.024 }
00:38:44.024 EOF
00:38:44.024 )")
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 ))
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:44.024 {
00:38:44.024   "params": {
00:38:44.024   "name": "Nvme$subsystem",
00:38:44.024   "trtype": "$TEST_TRANSPORT",
00:38:44.024   "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:44.024   "adrfam": "ipv4",
00:38:44.024   "trsvcid": "$NVMF_PORT",
00:38:44.024   "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:44.024   "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:44.024   "hdgst": ${hdgst:-false},
00:38:44.024   "ddgst": ${ddgst:-false}
00:38:44.024   },
00:38:44.024   "method": "bdev_nvme_attach_controller"
00:38:44.024 }
00:38:44.024 EOF
00:38:44.024 )")
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ ))
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq .
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=,
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:44.024 "params": {
00:38:44.024 "name": "Nvme0",
00:38:44.024 "trtype": "tcp",
00:38:44.024 "traddr": "10.0.0.2",
00:38:44.024 "adrfam": "ipv4",
00:38:44.024 "trsvcid": "4420",
00:38:44.024 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:44.024 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:44.024 "hdgst": false,
00:38:44.024 "ddgst": false
00:38:44.024 },
00:38:44.024 "method": "bdev_nvme_attach_controller"
00:38:44.024 },{
00:38:44.024 "params": {
00:38:44.024 "name": "Nvme1",
00:38:44.024 "trtype": "tcp",
00:38:44.024 "traddr": "10.0.0.2",
00:38:44.024 "adrfam": "ipv4",
00:38:44.024 "trsvcid": "4420",
00:38:44.024 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:44.024 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:44.024 "hdgst": false,
00:38:44.024 "ddgst": false
00:38:44.024 },
00:38:44.024 "method": "bdev_nvme_attach_controller"
00:38:44.024 }'
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:44.024  07:48:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:44.628 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:38:44.628 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:38:44.628 fio-3.35
00:38:44.628 Starting 2 threads
00:38:54.027
00:38:54.027 filename0: (groupid=0, jobs=1): err= 0: pid=1765283: Tue Nov 26 07:48:21 2024
00:38:54.027   read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10005msec)
00:38:54.027     slat (nsec): min=5463, max=32852, avg=6443.50, stdev=2326.02
00:38:54.027     clat (usec): min=602, max=42025, avg=21088.23, stdev=20168.04
00:38:54.027      lat (usec): min=608, max=42058, avg=21094.67, stdev=20168.01
00:38:54.027     clat percentiles (usec):
00:38:54.027      | 1.00th=[  644], 5.00th=[  775], 10.00th=[  807], 20.00th=[  832],
00:38:54.027      | 30.00th=[  848], 40.00th=[  873], 50.00th=[41157], 60.00th=[41157],
00:38:54.027      | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:38:54.027      | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:38:54.027      | 99.99th=[42206]
00:38:54.027    bw (  KiB/s): min= 672, max= 768, per=49.87%, avg=756.80, stdev=26.01, samples=20
00:38:54.027    iops        : min= 168, max= 192, avg=189.20, stdev= 6.50, samples=20
00:38:54.027   lat (usec)   : 750=3.53%, 1000=43.99%
00:38:54.027   lat (msec)   : 2=2.27%, 50=50.21%
00:38:54.027   cpu          : usr=95.47%, sys=4.32%, ctx=13, majf=0, minf=107
00:38:54.027   IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:54.027      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:54.027      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:54.027      issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:54.027      latency   : target=0, window=0, percentile=100.00%, depth=4
00:38:54.027 filename1: (groupid=0, jobs=1): err= 0: pid=1765284: Tue Nov 26 07:48:21 2024
00:38:54.027   read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec)
00:38:54.027     slat (nsec): min=5468, max=35178, avg=6460.63, stdev=2424.28
00:38:54.027     clat (usec): min=514, max=42533, avg=21083.29, stdev=20172.73
00:38:54.027      lat (usec): min=520, max=42539, avg=21089.75, stdev=20172.68
00:38:54.027     clat percentiles (usec):
00:38:54.027      | 1.00th=[  611], 5.00th=[  734], 10.00th=[  807], 20.00th=[  832],
00:38:54.027      | 30.00th=[  848], 40.00th=[  865], 50.00th=[40633], 60.00th=[41157],
00:38:54.027      | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:38:54.027      | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730],
00:38:54.027      | 99.99th=[42730]
00:38:54.027    bw (  KiB/s): min= 672, max= 768, per=50.06%, avg=759.58, stdev=25.78, samples=19
00:38:54.027    iops        : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19
00:38:54.027   lat (usec)   : 750=5.49%, 1000=43.30%
00:38:54.027   lat (msec)   : 2=1.00%, 50=50.21%
00:38:54.027   cpu          : usr=95.40%, sys=4.40%, ctx=14, majf=0, minf=144
00:38:54.027   IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:54.027      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:54.027      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:54.027      issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:54.027      latency   : target=0, window=0, percentile=100.00%, depth=4
00:38:54.027
00:38:54.027 Run status group 0 (all jobs):
00:38:54.027    READ: bw=1516KiB/s (1552kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=14.8MiB (15.5MB), run=10002-10005msec
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:54.027  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:54.028  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:54.028  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:38:54.028  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:54.028  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:54.028  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:54.028
00:38:54.028 real    0m11.471s
00:38:54.028 user    0m34.796s
00:38:54.028 sys     0m1.239s
00:38:54.028  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:54.028  07:48:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:54.028 ************************************
00:38:54.028 END TEST fio_dif_1_multi_subsystems
00:38:54.028 ************************************
00:38:54.028  07:48:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
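Each create_subsystem/destroy_subsystem pair traced in these tests is a short RPC sequence against the running nvmf_tgt; rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock. A sketch replaying the same setup and teardown by hand, with the flags copied verbatim from the trace (only the rpc.py path is assumed):

    # 64 MiB null bdev with 512-byte blocks, 16-byte metadata, DIF type 1
    # (the NULL_SIZE/NULL_BLOCK_SIZE/NULL_META values set earlier in this run)
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # teardown, mirroring destroy_subsystem:
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0

The fio_dif_rand_params pass that starts below repeats this pattern with randomized DIF type and fio parameters.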
00:38:54.028  07:48:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:54.028  07:48:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:54.028  07:48:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:54.028 ************************************
00:38:54.028 START TEST fio_dif_rand_params
00:38:54.028 ************************************
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:54.028  07:48:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:54.028 bdev_null0
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:54.028 [2024-11-26 07:48:22.035717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:54.028 {
00:38:54.028   "params": {
00:38:54.028   "name": "Nvme$subsystem",
00:38:54.028   "trtype": "$TEST_TRANSPORT",
00:38:54.028   "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:54.028   "adrfam": "ipv4",
00:38:54.028   "trsvcid": "$NVMF_PORT",
00:38:54.028   "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:54.028   "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:54.028   "hdgst": ${hdgst:-false},
00:38:54.028   "ddgst": ${ddgst:-false}
00:38:54.028   },
00:38:54.028   "method": "bdev_nvme_attach_controller"
00:38:54.028 }
00:38:54.028 EOF
00:38:54.028 )")
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
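For this pass the harness randomized the job shape to bs=128k, numjobs=3, iodepth=3, runtime=5 (visible in the dif.sh@103 xtrace above); gen_fio_conf streams the corresponding job to fio on /dev/fd/61. A sketch of what an equivalent standalone job file amounts to; the file name, section name, and the time_based setting are illustrative assumptions rather than the harness's exact output:

    cat > job.fio <<'EOF'
    [filename0]
    ioengine=spdk_bdev
    filename=Nvme0n1
    thread=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1
    EOF

The "Starting 3 threads" banner below reflects numjobs=3; each thread runs its own 5-second randread job at queue depth 3.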
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:54.028 "params": {
00:38:54.028 "name": "Nvme0",
00:38:54.028 "trtype": "tcp",
00:38:54.028 "traddr": "10.0.0.2",
00:38:54.028 "adrfam": "ipv4",
00:38:54.028 "trsvcid": "4420",
00:38:54.028 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:54.028 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:54.028 "hdgst": false,
00:38:54.028 "ddgst": false
00:38:54.028 },
00:38:54.028 "method": "bdev_nvme_attach_controller"
00:38:54.028 }'
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:54.028  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:54.321  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:54.321  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:54.321  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:54.321  07:48:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:54.587 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
...
00:38:54.587 fio-3.35
00:38:54.587 Starting 3 threads
00:39:01.172
00:39:01.172 filename0: (groupid=0, jobs=1): err= 0: pid=1767511: Tue Nov 26 07:48:28 2024
00:39:01.172   read: IOPS=309, BW=38.7MiB/s (40.6MB/s)(195MiB/5046msec)
00:39:01.172     slat (nsec): min=5477, max=32526, avg=6264.77, stdev=1346.40
00:39:01.172     clat (usec): min=5137, max=90456, avg=9648.44, stdev=7913.64
00:39:01.172      lat (usec): min=5144, max=90461, avg=9654.70, stdev=7913.60
00:39:01.172     clat percentiles (usec):
00:39:01.172      | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7373],
00:39:01.172      | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717],
00:39:01.172      | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[10290],
00:39:01.172      | 99.00th=[49546], 99.50th=[50594], 99.90th=[89654], 99.95th=[90702],
00:39:01.172      | 99.99th=[90702]
00:39:01.172    bw (  KiB/s): min=27392, max=45568, per=32.26%, avg=39961.60, stdev=5601.47, samples=10
00:39:01.172    iops        : min= 214, max= 356, avg=312.20, stdev=43.76, samples=10
00:39:01.172   lat (msec)   : 10=93.86%, 20=3.01%, 50=2.30%, 100=0.83%
00:39:01.172   cpu          : usr=94.37%, sys=5.39%, ctx=7, majf=0, minf=70
00:39:01.172   IO depths    : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:01.172      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:01.172      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:01.172      issued rwts: total=1563,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:01.172      latency   : target=0, window=0, percentile=100.00%, depth=3
00:39:01.172 filename0: (groupid=0, jobs=1): err= 0: pid=1767512: Tue Nov 26 07:48:28 2024
00:39:01.172   read: IOPS=336, BW=42.0MiB/s (44.1MB/s)(212MiB/5046msec)
00:39:01.172     slat (nsec): min=5558, max=31664, avg=8194.85, stdev=1460.46
00:39:01.172     clat (usec): min=4899, max=89439, avg=8890.11, stdev=4354.86
00:39:01.172      lat (usec): min=4908, max=89445, avg=8898.31, stdev=4354.71
00:39:01.172     clat percentiles (usec):
00:39:01.172      | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7242],
00:39:01.172      | 30.00th=[ 7701], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8979],
00:39:01.172      | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10814],
00:39:01.172      | 99.00th=[14091], 99.50th=[46924], 99.90th=[49546], 99.95th=[89654],
00:39:01.172      | 99.99th=[89654]
00:39:01.172    bw (  KiB/s): min=35072, max=47104, per=35.01%, avg=43366.40, stdev=3848.90, samples=10
00:39:01.172    iops        : min= 274, max= 368, avg=338.80, stdev=30.07, samples=10
00:39:01.172   lat (msec)   : 10=86.38%, 20=12.68%, 50=0.88%, 100=0.06%
00:39:01.172   cpu          : usr=94.57%, sys=5.19%, ctx=6, majf=0, minf=119
00:39:01.172   IO depths    : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:01.172      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:01.172      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:01.172      issued rwts: total=1696,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:01.172      latency   : target=0, window=0, percentile=100.00%, depth=3
00:39:01.172 filename0: (groupid=0, jobs=1): err= 0: pid=1767513: Tue Nov 26 07:48:28 2024
00:39:01.172   read: IOPS=321, BW=40.2MiB/s (42.2MB/s)(203MiB/5046msec)
00:39:01.172     slat (nsec): min=5713, max=32666, avg=8258.26, stdev=1613.76
00:39:01.172     clat (usec): min=4158, max=51549, avg=9283.96, stdev=4166.27
00:39:01.172      lat (usec): min=4167, max=51558, avg=9292.21, stdev=4166.35
00:39:01.172     clat percentiles (usec):
00:39:01.172      | 1.00th=[ 4817], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7767],
00:39:01.172      | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372],
00:39:01.172      | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10421], 95.00th=[10814],
00:39:01.172      | 99.00th=[45351], 99.50th=[47449], 99.90th=[50594], 99.95th=[51643],
00:39:01.172      | 99.99th=[51643]
00:39:01.172    bw (  KiB/s): min=35072, max=43520, per=33.52%, avg=41523.20, stdev=2469.66, samples=10
00:39:01.172    iops        : min= 274, max= 340, avg=324.40, stdev=19.29, samples=10
00:39:01.172   lat (msec)   : 10=80.17%, 20=18.78%, 50=0.92%, 100=0.12%
00:39:01.172   cpu          : usr=95.10%, sys=4.64%, ctx=6, majf=0, minf=84
00:39:01.172   IO depths    : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:01.172      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:01.172      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:01.172      issued rwts: total=1624,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:01.172      latency   : target=0, window=0, percentile=100.00%, depth=3
00:39:01.172
00:39:01.172 Run status group 0 (all jobs):
00:39:01.172    READ: bw=121MiB/s (127MB/s), 38.7MiB/s-42.0MiB/s (40.6MB/s-44.1MB/s), io=610MiB (640MB), run=5046-5046msec
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:01.172  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:01.173 bdev_null0
00:39:01.173  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:01.173  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:39:01.173  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:01.173  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:01.173  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:01.173  07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:39:01.173  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:01.173  07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:01.173  07:48:28 nvmf_dif.fio_dif_rand_params --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.173 bdev_null2 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:01.173 07:48:28 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.173 { 00:39:01.173 "params": { 00:39:01.173 "name": "Nvme$subsystem", 00:39:01.173 "trtype": "$TEST_TRANSPORT", 00:39:01.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.173 "adrfam": "ipv4", 00:39:01.173 "trsvcid": "$NVMF_PORT", 00:39:01.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.173 "hdgst": ${hdgst:-false}, 00:39:01.173 "ddgst": ${ddgst:-false} 00:39:01.173 }, 00:39:01.173 "method": "bdev_nvme_attach_controller" 00:39:01.173 } 00:39:01.173 EOF 00:39:01.173 )") 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.173 { 00:39:01.173 "params": { 00:39:01.173 "name": "Nvme$subsystem", 00:39:01.173 "trtype": "$TEST_TRANSPORT", 00:39:01.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.173 "adrfam": "ipv4", 00:39:01.173 "trsvcid": "$NVMF_PORT", 00:39:01.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.173 "hdgst": ${hdgst:-false}, 00:39:01.173 "ddgst": ${ddgst:-false} 00:39:01.173 }, 00:39:01.173 "method": "bdev_nvme_attach_controller" 00:39:01.173 } 00:39:01.173 EOF 00:39:01.173 )") 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:01.173 07:48:28 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.173 { 00:39:01.173 "params": { 00:39:01.173 "name": "Nvme$subsystem", 00:39:01.173 "trtype": "$TEST_TRANSPORT", 00:39:01.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.173 "adrfam": "ipv4", 00:39:01.173 "trsvcid": "$NVMF_PORT", 00:39:01.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.173 "hdgst": ${hdgst:-false}, 00:39:01.173 "ddgst": ${ddgst:-false} 00:39:01.173 }, 00:39:01.173 "method": "bdev_nvme_attach_controller" 00:39:01.173 } 00:39:01.173 EOF 00:39:01.173 )") 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:01.173 07:48:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:01.173 "params": { 00:39:01.173 "name": "Nvme0", 00:39:01.173 "trtype": "tcp", 00:39:01.173 "traddr": "10.0.0.2", 00:39:01.173 "adrfam": "ipv4", 00:39:01.173 "trsvcid": "4420", 00:39:01.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:01.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:01.173 "hdgst": false, 00:39:01.173 "ddgst": false 00:39:01.173 }, 00:39:01.173 "method": "bdev_nvme_attach_controller" 00:39:01.173 },{ 00:39:01.173 "params": { 00:39:01.173 "name": "Nvme1", 00:39:01.173 "trtype": "tcp", 00:39:01.173 "traddr": "10.0.0.2", 00:39:01.173 "adrfam": "ipv4", 00:39:01.173 "trsvcid": "4420", 00:39:01.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:01.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:01.173 "hdgst": false, 00:39:01.173 "ddgst": false 00:39:01.174 }, 00:39:01.174 "method": "bdev_nvme_attach_controller" 00:39:01.174 },{ 00:39:01.174 "params": { 00:39:01.174 "name": "Nvme2", 00:39:01.174 "trtype": "tcp", 00:39:01.174 "traddr": "10.0.0.2", 00:39:01.174 "adrfam": "ipv4", 00:39:01.174 "trsvcid": "4420", 00:39:01.174 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:01.174 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:01.174 "hdgst": false, 00:39:01.174 "ddgst": false 00:39:01.174 }, 00:39:01.174 "method": "bdev_nvme_attach_controller" 00:39:01.174 }' 00:39:01.174 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:01.174 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:01.174 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:01.174 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:01.174 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:01.174 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:01.174 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:01.174 
07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:01.174 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:01.174 07:48:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:01.174 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:01.174 ... 00:39:01.174 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:01.174 ... 00:39:01.174 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:01.174 ... 00:39:01.174 fio-3.35 00:39:01.174 Starting 24 threads 00:39:13.409 00:39:13.409 filename0: (groupid=0, jobs=1): err= 0: pid=1769014: Tue Nov 26 07:48:40 2024 00:39:13.409 read: IOPS=677, BW=2709KiB/s (2774kB/s)(26.5MiB/10002msec) 00:39:13.409 slat (nsec): min=5697, max=65786, avg=12414.41, stdev=9296.56 00:39:13.409 clat (usec): min=6181, max=25313, avg=23522.10, stdev=1672.17 00:39:13.409 lat (usec): min=6192, max=25320, avg=23534.51, stdev=1671.87 00:39:13.409 clat percentiles (usec): 00:39:13.409 | 1.00th=[10290], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:13.410 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:13.410 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:39:13.410 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:39:13.410 | 99.99th=[25297] 00:39:13.410 bw ( KiB/s): min= 2682, max= 3120, per=4.18%, avg=2709.47, stdev=99.44, samples=19 00:39:13.410 iops : min= 670, max= 780, avg=677.26, stdev=24.89, samples=19 00:39:13.410 lat (msec) : 10=0.94%, 20=0.56%, 50=98.49% 00:39:13.410 cpu : usr=98.52%, sys=1.06%, ctx=69, majf=0, minf=9 00:39:13.410 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 issued rwts: total=6774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.410 filename0: (groupid=0, jobs=1): err= 0: pid=1769015: Tue Nov 26 07:48:40 2024 00:39:13.410 read: IOPS=677, BW=2710KiB/s (2775kB/s)(26.5MiB/10014msec) 00:39:13.410 slat (nsec): min=5696, max=76114, avg=18172.13, stdev=11726.25 00:39:13.410 clat (usec): min=4718, max=28594, avg=23463.20, stdev=1677.47 00:39:13.410 lat (usec): min=4736, max=28601, avg=23481.37, stdev=1676.75 00:39:13.410 clat percentiles (usec): 00:39:13.410 | 1.00th=[14353], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:13.410 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:13.410 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:39:13.410 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:39:13.410 | 99.99th=[28705] 00:39:13.410 bw ( KiB/s): min= 2682, max= 3072, per=4.18%, avg=2706.00, stdev=86.18, samples=20 00:39:13.410 iops : min= 670, max= 768, avg=676.40, stdev=21.58, samples=20 00:39:13.410 lat (msec) : 10=0.47%, 20=1.44%, 50=98.08% 00:39:13.410 cpu : usr=98.13%, sys=1.25%, ctx=115, majf=0, minf=9 00:39:13.410 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 
8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.410 filename0: (groupid=0, jobs=1): err= 0: pid=1769016: Tue Nov 26 07:48:40 2024 00:39:13.410 read: IOPS=673, BW=2694KiB/s (2758kB/s)(26.3MiB/10003msec) 00:39:13.410 slat (nsec): min=5700, max=75421, avg=20035.44, stdev=10402.55 00:39:13.410 clat (usec): min=5215, max=47811, avg=23583.93, stdev=1664.78 00:39:13.410 lat (usec): min=5221, max=47838, avg=23603.96, stdev=1665.35 00:39:13.410 clat percentiles (usec): 00:39:13.410 | 1.00th=[20317], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:13.410 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:13.410 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:39:13.410 | 99.00th=[24773], 99.50th=[25035], 99.90th=[43779], 99.95th=[43779], 00:39:13.410 | 99.99th=[47973] 00:39:13.410 bw ( KiB/s): min= 2565, max= 2688, per=4.14%, avg=2679.32, stdev=27.84, samples=19 00:39:13.410 iops : min= 641, max= 672, avg=669.63, stdev= 7.00, samples=19 00:39:13.410 lat (msec) : 10=0.48%, 20=0.45%, 50=99.08% 00:39:13.410 cpu : usr=98.85%, sys=0.89%, ctx=12, majf=0, minf=9 00:39:13.410 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:39:13.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.410 filename0: (groupid=0, jobs=1): err= 0: pid=1769017: Tue Nov 26 07:48:40 2024 00:39:13.410 read: IOPS=673, BW=2694KiB/s (2759kB/s)(26.3MiB/10002msec) 00:39:13.410 slat (nsec): min=5552, max=83326, avg=20944.36, stdev=12357.99 00:39:13.410 clat (usec): min=2818, max=43519, avg=23577.31, stdev=1599.84 00:39:13.410 lat (usec): min=2824, max=43542, avg=23598.25, stdev=1601.48 00:39:13.410 clat percentiles (usec): 00:39:13.410 | 1.00th=[22152], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:13.410 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:13.410 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:39:13.410 | 99.00th=[24773], 99.50th=[24773], 99.90th=[43254], 99.95th=[43254], 00:39:13.410 | 99.99th=[43779] 00:39:13.410 bw ( KiB/s): min= 2565, max= 2688, per=4.14%, avg=2679.32, stdev=27.84, samples=19 00:39:13.410 iops : min= 641, max= 672, avg=669.63, stdev= 7.00, samples=19 00:39:13.410 lat (msec) : 4=0.09%, 10=0.30%, 20=0.33%, 50=99.29% 00:39:13.410 cpu : usr=98.98%, sys=0.77%, ctx=13, majf=0, minf=9 00:39:13.410 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.410 filename0: (groupid=0, jobs=1): err= 0: pid=1769018: Tue Nov 26 07:48:40 2024 00:39:13.410 read: IOPS=673, BW=2694KiB/s (2758kB/s)(26.3MiB/10014msec) 00:39:13.410 slat (nsec): min=5656, max=62815, avg=17659.45, 
stdev=10195.49 00:39:13.410 clat (usec): min=13552, max=35441, avg=23608.61, stdev=1614.65 00:39:13.410 lat (usec): min=13566, max=35470, avg=23626.27, stdev=1615.07 00:39:13.410 clat percentiles (usec): 00:39:13.410 | 1.00th=[17695], 5.00th=[20579], 10.00th=[23200], 20.00th=[23462], 00:39:13.410 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:13.410 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24511], 00:39:13.410 | 99.00th=[29754], 99.50th=[31065], 99.90th=[33817], 99.95th=[35390], 00:39:13.410 | 99.99th=[35390] 00:39:13.410 bw ( KiB/s): min= 2554, max= 2816, per=4.15%, avg=2689.16, stdev=58.56, samples=19 00:39:13.410 iops : min= 638, max= 704, avg=672.11, stdev=14.69, samples=19 00:39:13.410 lat (msec) : 20=3.97%, 50=96.03% 00:39:13.410 cpu : usr=98.23%, sys=1.15%, ctx=228, majf=0, minf=9 00:39:13.410 IO depths : 1=4.8%, 2=9.6%, 4=20.2%, 8=56.9%, 16=8.5%, 32=0.0%, >=64=0.0% 00:39:13.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 complete : 0=0.0%, 4=93.0%, 8=1.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 issued rwts: total=6744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.410 filename0: (groupid=0, jobs=1): err= 0: pid=1769019: Tue Nov 26 07:48:40 2024 00:39:13.410 read: IOPS=675, BW=2703KiB/s (2768kB/s)(26.4MiB/10014msec) 00:39:13.410 slat (nsec): min=5651, max=66495, avg=17441.42, stdev=11747.96 00:39:13.410 clat (usec): min=11221, max=34638, avg=23521.47, stdev=1513.16 00:39:13.410 lat (usec): min=11228, max=34659, avg=23538.91, stdev=1513.04 00:39:13.410 clat percentiles (usec): 00:39:13.410 | 1.00th=[15795], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:39:13.410 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:13.410 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24511], 00:39:13.410 | 99.00th=[27395], 99.50th=[29230], 99.90th=[30802], 99.95th=[34866], 00:39:13.410 | 99.99th=[34866] 00:39:13.410 bw ( KiB/s): min= 2560, max= 2816, per=4.17%, avg=2700.79, stdev=58.95, samples=19 00:39:13.410 iops : min= 640, max= 704, avg=675.11, stdev=14.77, samples=19 00:39:13.410 lat (msec) : 20=2.96%, 50=97.04% 00:39:13.410 cpu : usr=98.91%, sys=0.80%, ctx=83, majf=0, minf=9 00:39:13.410 IO depths : 1=5.5%, 2=11.5%, 4=24.4%, 8=51.5%, 16=7.0%, 32=0.0%, >=64=0.0% 00:39:13.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.410 filename0: (groupid=0, jobs=1): err= 0: pid=1769020: Tue Nov 26 07:48:40 2024 00:39:13.410 read: IOPS=672, BW=2691KiB/s (2755kB/s)(26.3MiB/10013msec) 00:39:13.410 slat (nsec): min=5693, max=50753, avg=11582.30, stdev=6922.14 00:39:13.410 clat (usec): min=14453, max=32639, avg=23683.89, stdev=787.91 00:39:13.410 lat (usec): min=14459, max=32653, avg=23695.47, stdev=787.85 00:39:13.410 clat percentiles (usec): 00:39:13.410 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23462], 00:39:13.410 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:13.410 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:39:13.410 | 99.00th=[24773], 99.50th=[24773], 99.90th=[32375], 99.95th=[32375], 00:39:13.410 | 99.99th=[32637] 00:39:13.410 bw ( KiB/s): min= 2560, 
max= 2816, per=4.15%, avg=2686.42, stdev=42.75, samples=19 00:39:13.410 iops : min= 640, max= 704, avg=671.47, stdev=10.70, samples=19 00:39:13.410 lat (msec) : 20=0.68%, 50=99.32% 00:39:13.410 cpu : usr=98.67%, sys=0.86%, ctx=164, majf=0, minf=9 00:39:13.410 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:13.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.410 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.410 filename0: (groupid=0, jobs=1): err= 0: pid=1769021: Tue Nov 26 07:48:40 2024 00:39:13.410 read: IOPS=673, BW=2694KiB/s (2759kB/s)(26.3MiB/10002msec) 00:39:13.410 slat (nsec): min=5648, max=59607, avg=13581.44, stdev=9164.86 00:39:13.410 clat (usec): min=12301, max=33077, avg=23646.84, stdev=1475.55 00:39:13.410 lat (usec): min=12317, max=33102, avg=23660.42, stdev=1475.83 00:39:13.410 clat percentiles (usec): 00:39:13.410 | 1.00th=[16581], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:13.410 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:13.410 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:39:13.410 | 99.00th=[30016], 99.50th=[31589], 99.90th=[32900], 99.95th=[33162], 00:39:13.410 | 99.99th=[33162] 00:39:13.410 bw ( KiB/s): min= 2560, max= 2938, per=4.16%, avg=2693.47, stdev=78.60, samples=19 00:39:13.410 iops : min= 640, max= 734, avg=673.26, stdev=19.58, samples=19 00:39:13.410 lat (msec) : 20=2.66%, 50=97.34% 00:39:13.410 cpu : usr=98.99%, sys=0.72%, ctx=64, majf=0, minf=9 00:39:13.410 IO depths : 1=5.6%, 2=11.5%, 4=24.1%, 8=51.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:39:13.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.410 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.411 filename1: (groupid=0, jobs=1): err= 0: pid=1769022: Tue Nov 26 07:48:40 2024 00:39:13.411 read: IOPS=670, BW=2683KiB/s (2748kB/s)(26.2MiB/10002msec) 00:39:13.411 slat (nsec): min=5668, max=67412, avg=13047.09, stdev=8690.70 00:39:13.411 clat (usec): min=12570, max=38370, avg=23744.99, stdev=2061.92 00:39:13.411 lat (usec): min=12576, max=38386, avg=23758.03, stdev=2062.38 00:39:13.411 clat percentiles (usec): 00:39:13.411 | 1.00th=[15926], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:39:13.411 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:13.411 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[25297], 00:39:13.411 | 99.00th=[32375], 99.50th=[32900], 99.90th=[33424], 99.95th=[36963], 00:39:13.411 | 99.99th=[38536] 00:39:13.411 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2682.53, stdev=53.51, samples=19 00:39:13.411 iops : min= 640, max= 704, avg=670.53, stdev=13.38, samples=19 00:39:13.411 lat (msec) : 20=4.47%, 50=95.53% 00:39:13.411 cpu : usr=98.86%, sys=0.82%, ctx=117, majf=0, minf=9 00:39:13.411 IO depths : 1=3.7%, 2=9.1%, 4=22.3%, 8=56.0%, 16=8.9%, 32=0.0%, >=64=0.0% 00:39:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 issued rwts: total=6710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.411 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:39:13.411 filename1: (groupid=0, jobs=1): err= 0: pid=1769023: Tue Nov 26 07:48:40 2024 00:39:13.411 read: IOPS=673, BW=2692KiB/s (2757kB/s)(26.3MiB/10008msec) 00:39:13.411 slat (nsec): min=5665, max=79019, avg=19123.83, stdev=13429.05 00:39:13.411 clat (usec): min=12816, max=26889, avg=23608.32, stdev=671.88 00:39:13.411 lat (usec): min=12825, max=26921, avg=23627.44, stdev=670.91 00:39:13.411 clat percentiles (usec): 00:39:13.411 | 1.00th=[22152], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:13.411 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:13.411 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:39:13.411 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:39:13.411 | 99.99th=[26870] 00:39:13.411 bw ( KiB/s): min= 2682, max= 2816, per=4.16%, avg=2693.47, stdev=29.78, samples=19 00:39:13.411 iops : min= 670, max= 704, avg=673.26, stdev= 7.49, samples=19 00:39:13.411 lat (msec) : 20=0.71%, 50=99.29% 00:39:13.411 cpu : usr=98.91%, sys=0.82%, ctx=13, majf=0, minf=9 00:39:13.411 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.411 filename1: (groupid=0, jobs=1): err= 0: pid=1769024: Tue Nov 26 07:48:40 2024 00:39:13.411 read: IOPS=671, BW=2688KiB/s (2752kB/s)(26.2MiB/10001msec) 00:39:13.411 slat (nsec): min=5720, max=84554, avg=23048.06, stdev=12983.46 00:39:13.411 clat (usec): min=11462, max=48018, avg=23588.56, stdev=1276.64 00:39:13.411 lat (usec): min=11480, max=48063, avg=23611.61, stdev=1277.08 00:39:13.411 clat percentiles (usec): 00:39:13.411 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:13.411 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:39:13.411 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[23987], 00:39:13.411 | 99.00th=[24773], 99.50th=[24773], 99.90th=[43779], 99.95th=[43779], 00:39:13.411 | 99.99th=[47973] 00:39:13.411 bw ( KiB/s): min= 2565, max= 2688, per=4.14%, avg=2679.32, stdev=27.84, samples=19 00:39:13.411 iops : min= 641, max= 672, avg=669.63, stdev= 7.00, samples=19 00:39:13.411 lat (msec) : 20=0.51%, 50=99.49% 00:39:13.411 cpu : usr=98.97%, sys=0.73%, ctx=61, majf=0, minf=9 00:39:13.411 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.411 filename1: (groupid=0, jobs=1): err= 0: pid=1769025: Tue Nov 26 07:48:40 2024 00:39:13.411 read: IOPS=672, BW=2691KiB/s (2756kB/s)(26.3MiB/10011msec) 00:39:13.411 slat (nsec): min=5629, max=75869, avg=20112.87, stdev=12743.34 00:39:13.411 clat (usec): min=8720, max=34293, avg=23578.14, stdev=1110.07 00:39:13.411 lat (usec): min=8726, max=34310, avg=23598.26, stdev=1110.75 00:39:13.411 clat percentiles (usec): 00:39:13.411 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:13.411 | 30.00th=[23462], 40.00th=[23462], 
50.00th=[23462], 60.00th=[23725], 00:39:13.411 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:39:13.411 | 99.00th=[24773], 99.50th=[25035], 99.90th=[34341], 99.95th=[34341], 00:39:13.411 | 99.99th=[34341] 00:39:13.411 bw ( KiB/s): min= 2560, max= 2810, per=4.15%, avg=2686.11, stdev=41.75, samples=19 00:39:13.411 iops : min= 640, max= 702, avg=671.37, stdev=10.37, samples=19 00:39:13.411 lat (msec) : 10=0.21%, 20=0.50%, 50=99.29% 00:39:13.411 cpu : usr=98.81%, sys=0.88%, ctx=68, majf=0, minf=9 00:39:13.411 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.411 filename1: (groupid=0, jobs=1): err= 0: pid=1769026: Tue Nov 26 07:48:40 2024 00:39:13.411 read: IOPS=677, BW=2709KiB/s (2774kB/s)(26.5MiB/10001msec) 00:39:13.411 slat (nsec): min=5657, max=51365, avg=14218.80, stdev=8825.83 00:39:13.411 clat (usec): min=7854, max=36388, avg=23483.78, stdev=1692.61 00:39:13.411 lat (usec): min=7861, max=36398, avg=23498.00, stdev=1692.99 00:39:13.411 clat percentiles (usec): 00:39:13.411 | 1.00th=[13829], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:13.411 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:13.411 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:39:13.411 | 99.00th=[24773], 99.50th=[25035], 99.90th=[36439], 99.95th=[36439], 00:39:13.411 | 99.99th=[36439] 00:39:13.411 bw ( KiB/s): min= 2682, max= 2949, per=4.18%, avg=2707.21, stdev=65.67, samples=19 00:39:13.411 iops : min= 670, max= 737, avg=676.68, stdev=16.41, samples=19 00:39:13.411 lat (msec) : 10=0.65%, 20=1.54%, 50=97.82% 00:39:13.411 cpu : usr=98.50%, sys=1.06%, ctx=75, majf=0, minf=9 00:39:13.411 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 issued rwts: total=6774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.411 filename1: (groupid=0, jobs=1): err= 0: pid=1769027: Tue Nov 26 07:48:40 2024 00:39:13.411 read: IOPS=679, BW=2719KiB/s (2784kB/s)(26.6MiB/10004msec) 00:39:13.411 slat (nsec): min=5661, max=69088, avg=10459.53, stdev=7281.34 00:39:13.411 clat (usec): min=4837, max=25401, avg=23453.50, stdev=1928.90 00:39:13.411 lat (usec): min=4860, max=25408, avg=23463.96, stdev=1927.60 00:39:13.411 clat percentiles (usec): 00:39:13.411 | 1.00th=[11863], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:13.411 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:13.411 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:39:13.411 | 99.00th=[24511], 99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:39:13.411 | 99.99th=[25297] 00:39:13.411 bw ( KiB/s): min= 2682, max= 3072, per=4.20%, avg=2720.42, stdev=93.94, samples=19 00:39:13.411 iops : min= 670, max= 768, avg=680.00, stdev=23.49, samples=19 00:39:13.411 lat (msec) : 10=0.88%, 20=1.47%, 50=97.65% 00:39:13.411 cpu : usr=98.92%, sys=0.81%, ctx=12, majf=0, minf=9 00:39:13.411 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 issued rwts: total=6800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.411 filename1: (groupid=0, jobs=1): err= 0: pid=1769028: Tue Nov 26 07:48:40 2024 00:39:13.411 read: IOPS=675, BW=2703KiB/s (2767kB/s)(26.4MiB/10017msec) 00:39:13.411 slat (nsec): min=5655, max=79676, avg=15574.89, stdev=10962.11 00:39:13.411 clat (usec): min=11494, max=31237, avg=23548.53, stdev=1143.62 00:39:13.411 lat (usec): min=11509, max=31254, avg=23564.11, stdev=1143.49 00:39:13.411 clat percentiles (usec): 00:39:13.411 | 1.00th=[15926], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:13.411 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:13.411 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:39:13.411 | 99.00th=[24511], 99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:39:13.411 | 99.99th=[31327] 00:39:13.411 bw ( KiB/s): min= 2682, max= 2816, per=4.17%, avg=2699.89, stdev=39.95, samples=19 00:39:13.411 iops : min= 670, max= 704, avg=674.84, stdev= 9.96, samples=19 00:39:13.411 lat (msec) : 20=1.68%, 50=98.32% 00:39:13.411 cpu : usr=98.93%, sys=0.80%, ctx=10, majf=0, minf=9 00:39:13.411 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.411 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.411 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.411 filename1: (groupid=0, jobs=1): err= 0: pid=1769029: Tue Nov 26 07:48:40 2024 00:39:13.411 read: IOPS=672, BW=2691KiB/s (2756kB/s)(26.3MiB/10004msec) 00:39:13.411 slat (nsec): min=5557, max=75940, avg=22713.58, stdev=11881.73 00:39:13.411 clat (usec): min=4902, max=44286, avg=23564.39, stdev=1521.25 00:39:13.411 lat (usec): min=4907, max=44309, avg=23587.10, stdev=1521.81 00:39:13.411 clat percentiles (usec): 00:39:13.411 | 1.00th=[20579], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:13.411 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:39:13.412 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:39:13.412 | 99.00th=[24773], 99.50th=[24773], 99.90th=[44303], 99.95th=[44303], 00:39:13.412 | 99.99th=[44303] 00:39:13.412 bw ( KiB/s): min= 2565, max= 2688, per=4.14%, avg=2679.32, stdev=27.84, samples=19 00:39:13.412 iops : min= 641, max= 672, avg=669.63, stdev= 7.00, samples=19 00:39:13.412 lat (msec) : 10=0.30%, 20=0.48%, 50=99.23% 00:39:13.412 cpu : usr=98.50%, sys=1.02%, ctx=166, majf=0, minf=9 00:39:13.412 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:39:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 issued rwts: total=6730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.412 filename2: (groupid=0, jobs=1): err= 0: pid=1769030: Tue Nov 26 07:48:40 2024 00:39:13.412 read: IOPS=675, BW=2703KiB/s (2768kB/s)(26.4MiB/10015msec) 00:39:13.412 slat (nsec): min=5670, max=79943, avg=12555.35, stdev=9238.68 
00:39:13.412 clat (usec): min=11467, max=32577, avg=23573.77, stdev=1254.26 00:39:13.412 lat (usec): min=11507, max=32584, avg=23586.32, stdev=1253.93 00:39:13.412 clat percentiles (usec): 00:39:13.412 | 1.00th=[14877], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:13.412 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:13.412 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:39:13.412 | 99.00th=[24773], 99.50th=[24773], 99.90th=[31589], 99.95th=[32375], 00:39:13.412 | 99.99th=[32637] 00:39:13.412 bw ( KiB/s): min= 2682, max= 2816, per=4.17%, avg=2700.79, stdev=40.68, samples=19 00:39:13.412 iops : min= 670, max= 704, avg=675.11, stdev=10.21, samples=19 00:39:13.412 lat (msec) : 20=1.80%, 50=98.20% 00:39:13.412 cpu : usr=98.71%, sys=0.94%, ctx=111, majf=0, minf=9 00:39:13.412 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:39:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.412 filename2: (groupid=0, jobs=1): err= 0: pid=1769031: Tue Nov 26 07:48:40 2024 00:39:13.412 read: IOPS=673, BW=2694KiB/s (2758kB/s)(26.3MiB/10006msec) 00:39:13.412 slat (nsec): min=5645, max=55517, avg=12701.79, stdev=8131.01 00:39:13.412 clat (usec): min=11202, max=36933, avg=23650.74, stdev=2009.95 00:39:13.412 lat (usec): min=11208, max=36956, avg=23663.44, stdev=2010.17 00:39:13.412 clat percentiles (usec): 00:39:13.412 | 1.00th=[15139], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:39:13.412 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:13.412 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:39:13.412 | 99.00th=[32375], 99.50th=[32637], 99.90th=[36439], 99.95th=[36439], 00:39:13.412 | 99.99th=[36963] 00:39:13.412 bw ( KiB/s): min= 2560, max= 2864, per=4.15%, avg=2690.95, stdev=70.06, samples=19 00:39:13.412 iops : min= 640, max= 716, avg=672.63, stdev=17.55, samples=19 00:39:13.412 lat (msec) : 20=3.83%, 50=96.17% 00:39:13.412 cpu : usr=98.65%, sys=0.89%, ctx=82, majf=0, minf=9 00:39:13.412 IO depths : 1=4.8%, 2=10.7%, 4=23.6%, 8=53.2%, 16=7.8%, 32=0.0%, >=64=0.0% 00:39:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 issued rwts: total=6738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.412 filename2: (groupid=0, jobs=1): err= 0: pid=1769032: Tue Nov 26 07:48:40 2024 00:39:13.412 read: IOPS=678, BW=2713KiB/s (2778kB/s)(26.5MiB/10004msec) 00:39:13.412 slat (nsec): min=5734, max=73532, avg=11122.23, stdev=8405.68 00:39:13.412 clat (usec): min=4371, max=25308, avg=23505.27, stdev=1878.53 00:39:13.412 lat (usec): min=4380, max=25321, avg=23516.39, stdev=1877.81 00:39:13.412 clat percentiles (usec): 00:39:13.412 | 1.00th=[12256], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:39:13.412 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:13.412 | 70.00th=[23725], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:39:13.412 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:39:13.412 | 99.99th=[25297] 00:39:13.412 bw ( KiB/s): min= 2682, max= 3200, 
per=4.19%, avg=2713.68, stdev=117.79, samples=19 00:39:13.412 iops : min= 670, max= 800, avg=678.32, stdev=29.48, samples=19 00:39:13.412 lat (msec) : 10=0.84%, 20=0.81%, 50=98.35% 00:39:13.412 cpu : usr=98.69%, sys=0.96%, ctx=69, majf=0, minf=9 00:39:13.412 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:39:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 issued rwts: total=6784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.412 filename2: (groupid=0, jobs=1): err= 0: pid=1769033: Tue Nov 26 07:48:40 2024 00:39:13.412 read: IOPS=673, BW=2693KiB/s (2758kB/s)(26.3MiB/10004msec) 00:39:13.412 slat (nsec): min=5655, max=83434, avg=22112.18, stdev=11647.03 00:39:13.412 clat (usec): min=5450, max=44500, avg=23561.41, stdev=1628.70 00:39:13.412 lat (usec): min=5456, max=44523, avg=23583.52, stdev=1629.14 00:39:13.412 clat percentiles (usec): 00:39:13.412 | 1.00th=[21890], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:13.412 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:39:13.412 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:39:13.412 | 99.00th=[24773], 99.50th=[24773], 99.90th=[44303], 99.95th=[44303], 00:39:13.412 | 99.99th=[44303] 00:39:13.412 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2679.05, stdev=28.98, samples=19 00:39:13.412 iops : min= 640, max= 672, avg=669.58, stdev= 7.23, samples=19 00:39:13.412 lat (msec) : 10=0.48%, 20=0.24%, 50=99.29% 00:39:13.412 cpu : usr=98.33%, sys=1.22%, ctx=84, majf=0, minf=9 00:39:13.412 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.412 filename2: (groupid=0, jobs=1): err= 0: pid=1769034: Tue Nov 26 07:48:40 2024 00:39:13.412 read: IOPS=672, BW=2691KiB/s (2756kB/s)(26.3MiB/10011msec) 00:39:13.412 slat (nsec): min=5696, max=70377, avg=20756.52, stdev=11161.05 00:39:13.412 clat (usec): min=8686, max=33944, avg=23581.60, stdev=1094.81 00:39:13.412 lat (usec): min=8693, max=33966, avg=23602.36, stdev=1095.42 00:39:13.412 clat percentiles (usec): 00:39:13.412 | 1.00th=[22414], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:13.412 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:39:13.412 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:39:13.412 | 99.00th=[24773], 99.50th=[25035], 99.90th=[33817], 99.95th=[33817], 00:39:13.412 | 99.99th=[33817] 00:39:13.412 bw ( KiB/s): min= 2560, max= 2810, per=4.15%, avg=2686.11, stdev=41.75, samples=19 00:39:13.412 iops : min= 640, max= 702, avg=671.37, stdev=10.37, samples=19 00:39:13.412 lat (msec) : 10=0.21%, 20=0.50%, 50=99.29% 00:39:13.412 cpu : usr=97.04%, sys=1.81%, ctx=1180, majf=0, minf=9 00:39:13.412 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:39:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 issued rwts: total=6736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:39:13.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.412 filename2: (groupid=0, jobs=1): err= 0: pid=1769035: Tue Nov 26 07:48:40 2024 00:39:13.412 read: IOPS=678, BW=2715KiB/s (2780kB/s)(26.6MiB/10015msec) 00:39:13.412 slat (nsec): min=5636, max=62938, avg=11498.62, stdev=8548.77 00:39:13.412 clat (usec): min=11656, max=38520, avg=23503.48, stdev=2872.84 00:39:13.412 lat (usec): min=11673, max=38551, avg=23514.98, stdev=2872.90 00:39:13.412 clat percentiles (usec): 00:39:13.412 | 1.00th=[15533], 5.00th=[19006], 10.00th=[19268], 20.00th=[22152], 00:39:13.412 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:39:13.412 | 70.00th=[23725], 80.00th=[24249], 90.00th=[27132], 95.00th=[28705], 00:39:13.412 | 99.00th=[31589], 99.50th=[35390], 99.90th=[38011], 99.95th=[38536], 00:39:13.412 | 99.99th=[38536] 00:39:13.412 bw ( KiB/s): min= 2554, max= 2800, per=4.20%, avg=2717.47, stdev=55.57, samples=19 00:39:13.412 iops : min= 638, max= 700, avg=679.16, stdev=13.96, samples=19 00:39:13.412 lat (msec) : 20=13.74%, 50=86.26% 00:39:13.412 cpu : usr=98.65%, sys=0.99%, ctx=73, majf=0, minf=9 00:39:13.412 IO depths : 1=0.6%, 2=1.2%, 4=4.6%, 8=78.1%, 16=15.4%, 32=0.0%, >=64=0.0% 00:39:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 complete : 0=0.0%, 4=89.5%, 8=8.3%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 issued rwts: total=6798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.412 filename2: (groupid=0, jobs=1): err= 0: pid=1769036: Tue Nov 26 07:48:40 2024 00:39:13.412 read: IOPS=681, BW=2728KiB/s (2793kB/s)(26.6MiB/10004msec) 00:39:13.412 slat (nsec): min=5495, max=74729, avg=15227.05, stdev=10246.73 00:39:13.412 clat (usec): min=4664, max=45026, avg=23357.30, stdev=3172.24 00:39:13.412 lat (usec): min=4670, max=45047, avg=23372.52, stdev=3172.95 00:39:13.412 clat percentiles (usec): 00:39:13.412 | 1.00th=[14615], 5.00th=[18482], 10.00th=[19530], 20.00th=[23200], 00:39:13.412 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:13.412 | 70.00th=[23725], 80.00th=[23987], 90.00th=[25822], 95.00th=[28181], 00:39:13.412 | 99.00th=[33424], 99.50th=[34866], 99.90th=[44827], 99.95th=[44827], 00:39:13.412 | 99.99th=[44827] 00:39:13.412 bw ( KiB/s): min= 2560, max= 2800, per=4.18%, avg=2706.84, stdev=61.87, samples=19 00:39:13.412 iops : min= 640, max= 700, avg=676.53, stdev=15.45, samples=19 00:39:13.412 lat (msec) : 10=0.41%, 20=12.21%, 50=87.38% 00:39:13.412 cpu : usr=98.85%, sys=0.80%, ctx=75, majf=0, minf=9 00:39:13.412 IO depths : 1=2.2%, 2=4.7%, 4=11.5%, 8=69.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:39:13.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 complete : 0=0.0%, 4=90.9%, 8=5.5%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.412 issued rwts: total=6822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.412 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.412 filename2: (groupid=0, jobs=1): err= 0: pid=1769037: Tue Nov 26 07:48:40 2024 00:39:13.413 read: IOPS=675, BW=2702KiB/s (2767kB/s)(26.4MiB/10006msec) 00:39:13.413 slat (nsec): min=5659, max=66180, avg=18466.97, stdev=10056.25 00:39:13.413 clat (usec): min=9581, max=36969, avg=23520.36, stdev=1279.03 00:39:13.413 lat (usec): min=9592, max=36981, avg=23538.82, stdev=1279.37 00:39:13.413 clat percentiles (usec): 00:39:13.413 | 1.00th=[17433], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:39:13.413 | 
30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:39:13.413 | 70.00th=[23725], 80.00th=[23725], 90.00th=[23987], 95.00th=[24249], 00:39:13.413 | 99.00th=[24773], 99.50th=[26870], 99.90th=[36963], 99.95th=[36963], 00:39:13.413 | 99.99th=[36963] 00:39:13.413 bw ( KiB/s): min= 2554, max= 2816, per=4.17%, avg=2702.95, stdev=59.42, samples=19 00:39:13.413 iops : min= 638, max= 704, avg=675.58, stdev=14.92, samples=19 00:39:13.413 lat (msec) : 10=0.03%, 20=2.20%, 50=97.77% 00:39:13.413 cpu : usr=98.93%, sys=0.81%, ctx=13, majf=0, minf=9 00:39:13.413 IO depths : 1=5.9%, 2=11.9%, 4=24.0%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:39:13.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.413 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.413 issued rwts: total=6760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.413 latency : target=0, window=0, percentile=100.00%, depth=16 00:39:13.413 00:39:13.413 Run status group 0 (all jobs): 00:39:13.413 READ: bw=63.2MiB/s (66.3MB/s), 2683KiB/s-2728KiB/s (2748kB/s-2793kB/s), io=633MiB (664MB), run=10001-10017msec 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 bdev_null0 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 [2024-11-26 07:48:40.352326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 bdev_null1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:13.413 { 00:39:13.413 "params": { 00:39:13.413 "name": "Nvme$subsystem", 00:39:13.413 "trtype": "$TEST_TRANSPORT", 00:39:13.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:13.413 "adrfam": "ipv4", 00:39:13.413 "trsvcid": "$NVMF_PORT", 00:39:13.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:13.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:13.413 "hdgst": ${hdgst:-false}, 00:39:13.413 "ddgst": ${ddgst:-false} 00:39:13.413 }, 00:39:13.413 "method": "bdev_nvme_attach_controller" 00:39:13.413 } 00:39:13.413 EOF 00:39:13.413 )") 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:13.413 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:13.414 { 00:39:13.414 "params": { 00:39:13.414 "name": "Nvme$subsystem", 00:39:13.414 "trtype": "$TEST_TRANSPORT", 00:39:13.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:13.414 "adrfam": "ipv4", 00:39:13.414 "trsvcid": "$NVMF_PORT", 00:39:13.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:13.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:13.414 "hdgst": ${hdgst:-false}, 00:39:13.414 "ddgst": ${ddgst:-false} 00:39:13.414 }, 00:39:13.414 "method": "bdev_nvme_attach_controller" 00:39:13.414 } 00:39:13.414 EOF 00:39:13.414 )") 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:13.414 07:48:40 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:13.414 "params": { 00:39:13.414 "name": "Nvme0", 00:39:13.414 "trtype": "tcp", 00:39:13.414 "traddr": "10.0.0.2", 00:39:13.414 "adrfam": "ipv4", 00:39:13.414 "trsvcid": "4420", 00:39:13.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:13.414 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:13.414 "hdgst": false, 00:39:13.414 "ddgst": false 00:39:13.414 }, 00:39:13.414 "method": "bdev_nvme_attach_controller" 00:39:13.414 },{ 00:39:13.414 "params": { 00:39:13.414 "name": "Nvme1", 00:39:13.414 "trtype": "tcp", 00:39:13.414 "traddr": "10.0.0.2", 00:39:13.414 "adrfam": "ipv4", 00:39:13.414 "trsvcid": "4420", 00:39:13.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:13.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:13.414 "hdgst": false, 00:39:13.414 "ddgst": false 00:39:13.414 }, 00:39:13.414 "method": "bdev_nvme_attach_controller" 00:39:13.414 }' 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:13.414 07:48:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:13.414 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:13.414 ... 00:39:13.414 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:39:13.414 ... 
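
The trace above shows how target/dif.sh launches fio: the generated bdev JSON config and the generated job file are handed over as /dev/fd/62 and /dev/fd/61, so neither touches disk, and the SPDK fio plugin is pulled in through LD_PRELOAD. A minimal standalone sketch of the same invocation pattern, with placeholder config and job contents rather than the ones generated by this run:

    # Sketch only: drive fio against an SPDK bdev the way the dif tests do.
    # build/fio/spdk_bdev is the fio plugin built alongside the SPDK tree.
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf=<(cat bdev.json) \
        <(printf '[job0]\nfilename=Nvme0n1\nrw=randread\nbs=8k\niodepth=8\nruntime=10\n')

As a sanity check on the four randread jobs that follow: with iodepth=8 and an average completion latency around 2.67 ms, Little's law gives roughly 8 / 0.00267 s, about 3000 IOPS per job, which lines up with the reported 2952-2977 IOPS.
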
00:39:13.414 fio-3.35 00:39:13.414 Starting 4 threads 00:39:18.707 00:39:18.707 filename0: (groupid=0, jobs=1): err= 0: pid=1771238: Tue Nov 26 07:48:46 2024 00:39:18.707 read: IOPS=2977, BW=23.3MiB/s (24.4MB/s)(116MiB/5003msec) 00:39:18.707 slat (nsec): min=5461, max=71185, avg=8620.18, stdev=2654.96 00:39:18.707 clat (usec): min=845, max=4041, avg=2664.60, stdev=130.57 00:39:18.707 lat (usec): min=861, max=4047, avg=2673.22, stdev=130.16 00:39:18.707 clat percentiles (usec): 00:39:18.707 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2606], 20.00th=[ 2638], 00:39:18.707 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:39:18.707 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2704], 95.00th=[ 2769], 00:39:18.707 | 99.00th=[ 2966], 99.50th=[ 3097], 99.90th=[ 3621], 99.95th=[ 3851], 00:39:18.707 | 99.99th=[ 4047] 00:39:18.707 bw ( KiB/s): min=23776, max=23984, per=25.12%, avg=23840.00, stdev=76.73, samples=9 00:39:18.707 iops : min= 2972, max= 2998, avg=2980.00, stdev= 9.59, samples=9 00:39:18.707 lat (usec) : 1000=0.01% 00:39:18.707 lat (msec) : 2=0.35%, 4=99.60%, 10=0.04% 00:39:18.707 cpu : usr=95.98%, sys=3.46%, ctx=144, majf=0, minf=9 00:39:18.707 IO depths : 1=0.1%, 2=0.1%, 4=70.0%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.707 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.707 issued rwts: total=14896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.707 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.707 filename0: (groupid=0, jobs=1): err= 0: pid=1771239: Tue Nov 26 07:48:46 2024 00:39:18.707 read: IOPS=2952, BW=23.1MiB/s (24.2MB/s)(115MiB/5002msec) 00:39:18.707 slat (nsec): min=5457, max=66577, avg=6194.77, stdev=2118.33 00:39:18.707 clat (usec): min=1582, max=5437, avg=2692.89, stdev=185.58 00:39:18.707 lat (usec): min=1588, max=5462, avg=2699.09, stdev=185.74 00:39:18.707 clat percentiles (usec): 00:39:18.707 | 1.00th=[ 2278], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2638], 00:39:18.707 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:39:18.707 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2835], 00:39:18.707 | 99.00th=[ 3785], 99.50th=[ 3884], 99.90th=[ 4178], 99.95th=[ 5342], 00:39:18.707 | 99.99th=[ 5407] 00:39:18.707 bw ( KiB/s): min=23310, max=23808, per=24.89%, avg=23614.00, stdev=139.21, samples=9 00:39:18.707 iops : min= 2913, max= 2976, avg=2951.67, stdev=17.61, samples=9 00:39:18.707 lat (msec) : 2=0.18%, 4=99.49%, 10=0.33% 00:39:18.707 cpu : usr=97.16%, sys=2.60%, ctx=7, majf=0, minf=9 00:39:18.707 IO depths : 1=0.1%, 2=0.1%, 4=72.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.707 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.707 issued rwts: total=14768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.707 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.707 filename1: (groupid=0, jobs=1): err= 0: pid=1771240: Tue Nov 26 07:48:46 2024 00:39:18.707 read: IOPS=2970, BW=23.2MiB/s (24.3MB/s)(116MiB/5002msec) 00:39:18.707 slat (nsec): min=5461, max=77510, avg=6208.68, stdev=2361.44 00:39:18.707 clat (usec): min=1317, max=4361, avg=2678.35, stdev=140.27 00:39:18.707 lat (usec): min=1326, max=4371, avg=2684.56, stdev=140.46 00:39:18.707 clat percentiles (usec): 00:39:18.707 | 1.00th=[ 2278], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2671], 
00:39:18.707 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:39:18.707 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2769], 00:39:18.707 | 99.00th=[ 3130], 99.50th=[ 3687], 99.90th=[ 4047], 99.95th=[ 4113], 00:39:18.707 | 99.99th=[ 4359] 00:39:18.707 bw ( KiB/s): min=23456, max=23904, per=25.04%, avg=23763.56, stdev=131.16, samples=9 00:39:18.707 iops : min= 2932, max= 2988, avg=2970.44, stdev=16.39, samples=9 00:39:18.707 lat (msec) : 2=0.15%, 4=99.71%, 10=0.14% 00:39:18.707 cpu : usr=96.00%, sys=3.76%, ctx=6, majf=0, minf=9 00:39:18.707 IO depths : 1=0.1%, 2=0.1%, 4=66.0%, 8=33.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.707 complete : 0=0.0%, 4=97.4%, 8=2.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.707 issued rwts: total=14860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.707 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.707 filename1: (groupid=0, jobs=1): err= 0: pid=1771241: Tue Nov 26 07:48:46 2024 00:39:18.707 read: IOPS=2962, BW=23.1MiB/s (24.3MB/s)(116MiB/5002msec) 00:39:18.707 slat (nsec): min=5457, max=69967, avg=6170.46, stdev=2155.31 00:39:18.707 clat (usec): min=1198, max=5305, avg=2682.81, stdev=152.75 00:39:18.707 lat (usec): min=1203, max=5332, avg=2688.98, stdev=153.00 00:39:18.707 clat percentiles (usec): 00:39:18.707 | 1.00th=[ 2409], 5.00th=[ 2540], 10.00th=[ 2638], 20.00th=[ 2638], 00:39:18.707 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:39:18.707 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2704], 95.00th=[ 2737], 00:39:18.707 | 99.00th=[ 3195], 99.50th=[ 3851], 99.90th=[ 4424], 99.95th=[ 5211], 00:39:18.707 | 99.99th=[ 5276] 00:39:18.707 bw ( KiB/s): min=23198, max=23808, per=24.96%, avg=23683.33, stdev=188.90, samples=9 00:39:18.707 iops : min= 2899, max= 2976, avg=2960.33, stdev=23.85, samples=9 00:39:18.707 lat (msec) : 2=0.16%, 4=99.52%, 10=0.32% 00:39:18.707 cpu : usr=96.44%, sys=3.32%, ctx=7, majf=0, minf=9 00:39:18.707 IO depths : 1=0.1%, 2=0.1%, 4=74.6%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:18.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.707 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:18.707 issued rwts: total=14817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:18.707 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:18.707 00:39:18.707 Run status group 0 (all jobs): 00:39:18.707 READ: bw=92.7MiB/s (97.2MB/s), 23.1MiB/s-23.3MiB/s (24.2MB/s-24.4MB/s), io=464MiB (486MB), run=5002-5003msec 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.707 07:48:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:18.707 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.708 00:39:18.708 real 0m24.661s 00:39:18.708 user 5m17.882s 00:39:18.708 sys 0m4.829s 00:39:18.708 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:18.708 07:48:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:18.708 ************************************ 00:39:18.708 END TEST fio_dif_rand_params 00:39:18.708 ************************************ 00:39:18.708 07:48:46 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:18.708 07:48:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:18.708 07:48:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:18.708 07:48:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:18.708 ************************************ 00:39:18.708 START TEST fio_dif_digest 00:39:18.708 ************************************ 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:18.708 bdev_null0 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:18.708 [2024-11-26 07:48:46.773425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:18.708 { 00:39:18.708 "params": { 00:39:18.708 "name": "Nvme$subsystem", 00:39:18.708 "trtype": "$TEST_TRANSPORT", 00:39:18.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:18.708 "adrfam": "ipv4", 00:39:18.708 "trsvcid": "$NVMF_PORT", 00:39:18.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:18.708 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:39:18.708 "hdgst": ${hdgst:-false}, 00:39:18.708 "ddgst": ${ddgst:-false} 00:39:18.708 }, 00:39:18.708 "method": "bdev_nvme_attach_controller" 00:39:18.708 } 00:39:18.708 EOF 00:39:18.708 )") 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=,
00:39:18.708 07:48:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:39:18.708 "params": {
00:39:18.708 "name": "Nvme0",
00:39:18.708 "trtype": "tcp",
00:39:18.708 "traddr": "10.0.0.2",
00:39:18.708 "adrfam": "ipv4",
00:39:18.708 "trsvcid": "4420",
00:39:18.708 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:18.708 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:18.708 "hdgst": true,
00:39:18.708 "ddgst": true
00:39:18.708 },
00:39:18.708 "method": "bdev_nvme_attach_controller"
00:39:18.708 }'
00:39:18.969 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:39:18.969 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:39:18.969 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:39:18.969 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:39:18.969 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:39:18.969 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:39:18.969 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:39:18.969 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:39:18.969 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:39:18.969 07:48:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:39:19.229 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:39:19.229 ...
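
The job file behind the filename0 line above comes from gen_fio_conf via /dev/fd/61 and is never echoed into the log. A plausible reconstruction from the values set at target/dif.sh@127 (bs=128k, numjobs=3, iodepth=3, runtime=10); the exact option spelling in the generated file is an assumption:

    # Sketch: approximate job file for the digest run, written out for inspection.
    cat > /tmp/digest.fio <<'EOF'
    [filename0]
    filename=Nvme0n1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=10
    EOF

With numjobs=3 against a single bdev, this is what produces the "Starting 3 threads" banner below.
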
00:39:19.229 fio-3.35 00:39:19.229 Starting 3 threads 00:39:31.461 00:39:31.461 filename0: (groupid=0, jobs=1): err= 0: pid=1772734: Tue Nov 26 07:48:57 2024 00:39:31.461 read: IOPS=303, BW=37.9MiB/s (39.8MB/s)(381MiB/10047msec) 00:39:31.461 slat (nsec): min=5888, max=31866, avg=6649.34, stdev=1118.42 00:39:31.461 clat (usec): min=6464, max=51010, avg=9859.35, stdev=1254.59 00:39:31.461 lat (usec): min=6476, max=51017, avg=9866.00, stdev=1254.50 00:39:31.461 clat percentiles (usec): 00:39:31.461 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:39:31.461 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:39:31.461 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:39:31.461 | 99.00th=[11731], 99.50th=[11863], 99.90th=[12387], 99.95th=[47449], 00:39:31.461 | 99.99th=[51119] 00:39:31.461 bw ( KiB/s): min=38400, max=39680, per=33.99%, avg=39014.40, stdev=393.10, samples=20 00:39:31.461 iops : min= 300, max= 310, avg=304.80, stdev= 3.07, samples=20 00:39:31.461 lat (msec) : 10=59.61%, 20=40.33%, 50=0.03%, 100=0.03% 00:39:31.461 cpu : usr=94.05%, sys=5.71%, ctx=18, majf=0, minf=103 00:39:31.461 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:31.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.461 issued rwts: total=3050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:31.461 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:31.461 filename0: (groupid=0, jobs=1): err= 0: pid=1772735: Tue Nov 26 07:48:57 2024 00:39:31.461 read: IOPS=291, BW=36.5MiB/s (38.3MB/s)(367MiB/10045msec) 00:39:31.461 slat (nsec): min=5905, max=31860, avg=6791.11, stdev=1117.58 00:39:31.461 clat (usec): min=7784, max=50512, avg=10251.00, stdev=1301.91 00:39:31.461 lat (usec): min=7791, max=50519, avg=10257.79, stdev=1301.97 00:39:31.461 clat percentiles (usec): 00:39:31.461 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:39:31.461 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:39:31.461 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11338], 95.00th=[11731], 00:39:31.461 | 99.00th=[12387], 99.50th=[12518], 99.90th=[15664], 99.95th=[46400], 00:39:31.461 | 99.99th=[50594] 00:39:31.461 bw ( KiB/s): min=36352, max=39168, per=32.69%, avg=37516.80, stdev=629.68, samples=20 00:39:31.461 iops : min= 284, max= 306, avg=293.10, stdev= 4.92, samples=20 00:39:31.461 lat (msec) : 10=40.47%, 20=59.46%, 50=0.03%, 100=0.03% 00:39:31.461 cpu : usr=94.72%, sys=5.05%, ctx=16, majf=0, minf=197 00:39:31.461 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:31.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.461 issued rwts: total=2933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:31.461 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:31.461 filename0: (groupid=0, jobs=1): err= 0: pid=1772736: Tue Nov 26 07:48:57 2024 00:39:31.461 read: IOPS=301, BW=37.7MiB/s (39.5MB/s)(378MiB/10048msec) 00:39:31.461 slat (nsec): min=5951, max=32555, avg=6674.09, stdev=952.75 00:39:31.461 clat (usec): min=7323, max=50116, avg=9935.38, stdev=1250.91 00:39:31.461 lat (usec): min=7329, max=50122, avg=9942.06, stdev=1250.92 00:39:31.461 clat percentiles (usec): 00:39:31.461 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 
9241], 00:39:31.462 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:39:31.462 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:39:31.462 | 99.00th=[11731], 99.50th=[11994], 99.90th=[12256], 99.95th=[47973], 00:39:31.462 | 99.99th=[50070] 00:39:31.462 bw ( KiB/s): min=38144, max=39680, per=33.73%, avg=38720.00, stdev=468.92, samples=20 00:39:31.462 iops : min= 298, max= 310, avg=302.50, stdev= 3.66, samples=20 00:39:31.462 lat (msec) : 10=55.07%, 20=44.86%, 50=0.03%, 100=0.03% 00:39:31.462 cpu : usr=94.31%, sys=5.45%, ctx=16, majf=0, minf=107 00:39:31.462 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:31.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:31.462 issued rwts: total=3027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:31.462 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:31.462 00:39:31.462 Run status group 0 (all jobs): 00:39:31.462 READ: bw=112MiB/s (118MB/s), 36.5MiB/s-37.9MiB/s (38.3MB/s-39.8MB/s), io=1126MiB (1181MB), run=10045-10048msec 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:31.462 00:39:31.462 real 0m11.267s 00:39:31.462 user 0m42.362s 00:39:31.462 sys 0m1.965s 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:31.462 07:48:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:31.462 ************************************ 00:39:31.462 END TEST fio_dif_digest 00:39:31.462 ************************************ 00:39:31.462 07:48:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:31.462 07:48:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:31.462 rmmod nvme_tcp 00:39:31.462 rmmod nvme_fabrics 00:39:31.462 rmmod nvme_keyring 00:39:31.462 07:48:58 
nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1761702 ']' 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1761702 00:39:31.462 07:48:58 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1761702 ']' 00:39:31.462 07:48:58 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1761702 00:39:31.462 07:48:58 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:39:31.462 07:48:58 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:31.462 07:48:58 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1761702 00:39:31.462 07:48:58 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:31.462 07:48:58 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:31.462 07:48:58 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1761702' 00:39:31.462 killing process with pid 1761702 00:39:31.462 07:48:58 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1761702 00:39:31.462 07:48:58 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1761702 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:31.462 07:48:58 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:34.008 Waiting for block devices as requested 00:39:34.008 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:34.008 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:34.008 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:34.008 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:34.008 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:34.008 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:34.008 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:34.268 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:34.268 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:34.530 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:34.530 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:34.530 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:34.792 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:34.792 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:34.792 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:35.054 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:35.054 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:35.315 07:49:03 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:35.315 07:49:03 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:35.315 07:49:03 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:39:35.315 07:49:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:39:35.315 07:49:03 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:35.315 07:49:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:39:35.315 07:49:03 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:35.315 07:49:03 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:35.315 07:49:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:35.315 07:49:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:35.315 07:49:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.858 07:49:05 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
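
The block above is nvmftestfini unwinding the whole nvmf_dif rig. Condensed into plain commands (a sketch: the netns deletion is what _remove_spdk_ns is assumed to do, and the setup.sh path is abbreviated):

    # Cleanup sequence mirrored from the trace above.
    sync
    modprobe -v -r nvme-tcp                                # unloads nvme_tcp
    modprobe -v -r nvme-fabrics                            # plus nvme_fabrics/nvme_keyring
    kill "$nvmfpid"                                        # nvmf_tgt pid, 1761702 in this run
    ./scripts/setup.sh reset                               # rebind NICs/ioat from vfio-pci to kernel drivers
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the SPDK-tagged ACCEPT rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # clear the initiator-side test address
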
00:39:37.858 00:39:37.858 real 1m18.423s 00:39:37.858 user 8m4.912s 00:39:37.858 sys 0m22.382s 00:39:37.858 07:49:05 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:37.858 07:49:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:37.858 ************************************ 00:39:37.858 END TEST nvmf_dif 00:39:37.858 ************************************ 00:39:37.858 07:49:05 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:37.858 07:49:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:37.858 07:49:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:37.858 07:49:05 -- common/autotest_common.sh@10 -- # set +x 00:39:37.858 ************************************ 00:39:37.858 START TEST nvmf_abort_qd_sizes 00:39:37.858 ************************************ 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:37.858 * Looking for test storage... 00:39:37.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:37.858 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:37.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.858 --rc genhtml_branch_coverage=1 00:39:37.858 --rc genhtml_function_coverage=1 00:39:37.859 --rc genhtml_legend=1 00:39:37.859 --rc geninfo_all_blocks=1 00:39:37.859 --rc geninfo_unexecuted_blocks=1 00:39:37.859 00:39:37.859 ' 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:37.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.859 --rc genhtml_branch_coverage=1 00:39:37.859 --rc genhtml_function_coverage=1 00:39:37.859 --rc genhtml_legend=1 00:39:37.859 --rc geninfo_all_blocks=1 00:39:37.859 --rc geninfo_unexecuted_blocks=1 00:39:37.859 00:39:37.859 ' 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:37.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.859 --rc genhtml_branch_coverage=1 00:39:37.859 --rc genhtml_function_coverage=1 00:39:37.859 --rc genhtml_legend=1 00:39:37.859 --rc geninfo_all_blocks=1 00:39:37.859 --rc geninfo_unexecuted_blocks=1 00:39:37.859 00:39:37.859 ' 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:37.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:37.859 --rc genhtml_branch_coverage=1 00:39:37.859 --rc genhtml_function_coverage=1 00:39:37.859 --rc genhtml_legend=1 00:39:37.859 --rc geninfo_all_blocks=1 00:39:37.859 --rc geninfo_unexecuted_blocks=1 00:39:37.859 00:39:37.859 ' 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:37.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:39:37.859 07:49:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:46.009 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:46.010 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:46.010 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:46.010 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:46.010 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:46.010 07:49:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:46.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:46.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:39:46.010 00:39:46.010 --- 10.0.0.2 ping statistics --- 00:39:46.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:46.010 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:46.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:46.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:39:46.010 00:39:46.010 --- 10.0.0.1 ping statistics --- 00:39:46.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:46.010 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:46.010 07:49:12 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:48.555 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:48.555 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:48.815 07:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:48.815 07:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:48.815 07:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:48.815 07:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:48.815 07:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:48.815 07:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1782169 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1782169 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1782169 ']' 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:49.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:49.135 07:49:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:49.135 [2024-11-26 07:49:16.989873] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:39:49.135 [2024-11-26 07:49:16.989923] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:49.135 [2024-11-26 07:49:17.084858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:49.135 [2024-11-26 07:49:17.138256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:49.135 [2024-11-26 07:49:17.138309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:49.135 [2024-11-26 07:49:17.138319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:49.135 [2024-11-26 07:49:17.138326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:49.135 [2024-11-26 07:49:17.138332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:49.135 [2024-11-26 07:49:17.140314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:49.135 [2024-11-26 07:49:17.140476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:49.135 [2024-11-26 07:49:17.140611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:49.135 [2024-11-26 07:49:17.140612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:49.764 07:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:49.764 07:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:39:49.764 07:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:49.764 07:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:49.764 07:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:49.764 07:49:17 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:49.764 07:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:39:50.025 
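The reactors above come up inside the cvl_0_0_ns_spdk namespace, so target traffic leaves through the physical e810 port while the initiator stays in the root namespace. A condensed sketch of what nvmfappstart plus waitforlisten in this trace amount to (the polling loop is illustrative, not SPDK's exact helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the target pinned to four cores (-m 0xf) with all tracepoint groups enabled (-e 0xFFFF).
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# Block until the app serves JSON-RPC on its UNIX socket before issuing any rpc_cmd.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done
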
07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:50.025 07:49:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:50.025 ************************************ 00:39:50.025 START TEST spdk_target_abort 00:39:50.025 ************************************ 00:39:50.025 07:49:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:39:50.025 07:49:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:50.025 07:49:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:39:50.025 07:49:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.025 07:49:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:50.286 spdk_targetn1 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:50.286 [2024-11-26 07:49:18.224036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:50.286 [2024-11-26 07:49:18.272376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:50.286 07:49:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:50.546 [2024-11-26 07:49:18.461638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:191 nsid:1 lba:288 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:39:50.546 [2024-11-26 07:49:18.461675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0025 p:1 m:0 dnr:0 00:39:50.546 [2024-11-26 07:49:18.533704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2528 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:39:50.546 [2024-11-26 07:49:18.533728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:50.546 [2024-11-26 07:49:18.533816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2552 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:39:50.546 [2024-11-26 07:49:18.533827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:39:50.546 [2024-11-26 07:49:18.565647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3616 len:8 PRP1 0x200004abe000 PRP2 0x0 00:39:50.546 [2024-11-26 07:49:18.565668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00c6 p:0 m:0 dnr:0 00:39:50.546 [2024-11-26 07:49:18.573662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3880 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:39:50.546 [2024-11-26 07:49:18.573681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00e8 p:0 m:0 dnr:0 00:39:53.845 Initializing NVMe Controllers 00:39:53.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:53.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:53.845 Initialization complete. Launching workers. 
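The per-run summary printed next obeys a simple accounting identity, at least in these runs: every I/O the example completes or fails has exactly one abort attempt recorded against it (submitted or failed-to-submit), and every submitted abort lands in either the success or the unsuccessful bucket. Checked in shell against this first (-q 4) run:

echo $(( 11800 + 5 ))    # I/O completed + failed            -> 11805
echo $(( 2597 + 9208 ))  # aborts submitted + not submitted  -> 11805, same total
echo $(( 722 + 1875 ))   # abort success + unsuccessful      -> 2597, the submitted count
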
00:39:53.845 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11800, failed: 5 00:39:53.845 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2597, failed to submit 9208 00:39:53.845 success 722, unsuccessful 1875, failed 0 00:39:53.845 07:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:53.845 07:49:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:53.845 [2024-11-26 07:49:21.785805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:432 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:39:53.845 [2024-11-26 07:49:21.785850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:003b p:1 m:0 dnr:0 00:39:53.845 [2024-11-26 07:49:21.818086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:1080 len:8 PRP1 0x200004e56000 PRP2 0x0 00:39:53.845 [2024-11-26 07:49:21.818113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:008c p:1 m:0 dnr:0 00:39:53.845 [2024-11-26 07:49:21.834284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:1376 len:8 PRP1 0x200004e42000 PRP2 0x0 00:39:53.845 [2024-11-26 07:49:21.834311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00bb p:1 m:0 dnr:0 00:39:53.845 [2024-11-26 07:49:21.842248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:1576 len:8 PRP1 0x200004e3a000 PRP2 0x0 00:39:53.845 [2024-11-26 07:49:21.842270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:00d0 p:1 m:0 dnr:0 00:39:53.845 [2024-11-26 07:49:21.858374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:1960 len:8 PRP1 0x200004e58000 PRP2 0x0 00:39:53.845 [2024-11-26 07:49:21.858398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:39:53.845 [2024-11-26 07:49:21.882129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:2536 len:8 PRP1 0x200004e54000 PRP2 0x0 00:39:53.845 [2024-11-26 07:49:21.882151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:39:53.845 [2024-11-26 07:49:21.928309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:3608 len:8 PRP1 0x200004e54000 PRP2 0x0 00:39:53.845 [2024-11-26 07:49:21.928331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:00cc p:0 m:0 dnr:0 00:39:57.145 [2024-11-26 07:49:24.571037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:62872 len:8 PRP1 0x200004e5e000 PRP2 0x0 00:39:57.146 [2024-11-26 07:49:24.571070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00b5 p:1 m:0 dnr:0 00:39:57.146 Initializing NVMe Controllers 00:39:57.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:testnqn 00:39:57.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:57.146 Initialization complete. Launching workers. 00:39:57.146 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8420, failed: 8 00:39:57.146 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 7185 00:39:57.146 success 345, unsuccessful 898, failed 0 00:39:57.146 07:49:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:57.146 07:49:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:59.059 [2024-11-26 07:49:26.663568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:157 nsid:1 lba:184256 len:8 PRP1 0x200004ae2000 PRP2 0x0 00:39:59.059 [2024-11-26 07:49:26.663596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:157 cdw0:0 sqhd:0017 p:1 m:0 dnr:0 00:39:59.320 [2024-11-26 07:49:27.244777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:164 nsid:1 lba:251704 len:8 PRP1 0x200004ae8000 PRP2 0x0 00:39:59.320 [2024-11-26 07:49:27.244800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:164 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:39:59.889 [2024-11-26 07:49:27.974604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:151 nsid:1 lba:337168 len:8 PRP1 0x200004b22000 PRP2 0x0 00:39:59.889 [2024-11-26 07:49:27.974626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:151 cdw0:0 sqhd:00c4 p:1 m:0 dnr:0 00:40:00.150 Initializing NVMe Controllers 00:40:00.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:00.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:00.150 Initialization complete. Launching workers. 
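The results above and below come from the same abort example re-invoked once per queue depth; the @26/@32/@34 trace lines expand to the equivalent of (reusing the $SPDK shorthand from the earlier sketch):

for qd in 4 24 64; do
    "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done
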
00:40:00.150 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43953, failed: 3 00:40:00.150 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2709, failed to submit 41247 00:40:00.150 success 592, unsuccessful 2117, failed 0 00:40:00.150 07:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:00.150 07:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.150 07:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:00.150 07:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:00.150 07:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:00.150 07:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:00.150 07:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:02.065 07:49:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.065 07:49:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1782169 00:40:02.065 07:49:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1782169 ']' 00:40:02.065 07:49:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1782169 00:40:02.065 07:49:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:40:02.065 07:49:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:02.065 07:49:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1782169 00:40:02.065 07:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:02.065 07:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:02.065 07:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1782169' 00:40:02.065 killing process with pid 1782169 00:40:02.065 07:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1782169 00:40:02.065 07:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1782169 00:40:02.065 00:40:02.065 real 0m12.234s 00:40:02.065 user 0m49.906s 00:40:02.065 sys 0m2.007s 00:40:02.065 07:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:02.065 07:49:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:02.065 ************************************ 00:40:02.065 END TEST spdk_target_abort 00:40:02.065 ************************************ 00:40:02.326 07:49:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:02.326 07:49:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:02.326 07:49:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:02.326 07:49:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:02.326 ************************************ 00:40:02.326 START TEST kernel_target_abort 00:40:02.326 
************************************ 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:02.326 07:49:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:05.624 Waiting for block devices as requested 00:40:05.624 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:05.624 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:05.885 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:05.885 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:05.885 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:06.146 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:06.146 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:06.146 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:06.146 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:06.407 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:06.407 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:06.667 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:06.667 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:06.667 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:06.927 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:06.927 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:06.927 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:40:07.499 No valid GPT data, bailing 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:07.499 07:49:35 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:07.499 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:40:07.500 00:40:07.500 Discovery Log Number of Records 2, Generation counter 2 00:40:07.500 =====Discovery Log Entry 0====== 00:40:07.500 trtype: tcp 00:40:07.500 adrfam: ipv4 00:40:07.500 subtype: current discovery subsystem 00:40:07.500 treq: not specified, sq flow control disable supported 00:40:07.500 portid: 1 00:40:07.500 trsvcid: 4420 00:40:07.500 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:07.500 traddr: 10.0.0.1 00:40:07.500 eflags: none 00:40:07.500 sectype: none 00:40:07.500 =====Discovery Log Entry 1====== 00:40:07.500 trtype: tcp 00:40:07.500 adrfam: ipv4 00:40:07.500 subtype: nvme subsystem 00:40:07.500 treq: not specified, sq flow control disable supported 00:40:07.500 portid: 1 00:40:07.500 trsvcid: 4420 00:40:07.500 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:07.500 traddr: 10.0.0.1 00:40:07.500 eflags: none 00:40:07.500 sectype: none 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:07.500 07:49:35 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:07.500 07:49:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:10.798 Initializing NVMe Controllers 00:40:10.798 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:10.798 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:10.798 Initialization complete. Launching workers. 00:40:10.798 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66892, failed: 0 00:40:10.798 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66892, failed to submit 0 00:40:10.798 success 0, unsuccessful 66892, failed 0 00:40:10.798 07:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:10.798 07:49:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:14.098 Initializing NVMe Controllers 00:40:14.098 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:14.098 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:14.098 Initialization complete. Launching workers. 
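The kernel target these runs exercise was assembled through nvmet configfs in the configure_kernel_target trace further up. Condensed, with attribute file names filled in from the standard nvmet layout (the echo destinations are inferred; the values come from this log):

modprobe nvmet   # nvmet_tcp is typically auto-loaded when the tcp port is created
cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 ports/1
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

Note that every kernel-target summary in this section reports success 0, which is consistent with the Linux target completing the outstanding I/O before the aborts can take effect.
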
00:40:14.098 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119183, failed: 0 00:40:14.098 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29990, failed to submit 89193 00:40:14.098 success 0, unsuccessful 29990, failed 0 00:40:14.098 07:49:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:14.098 07:49:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:17.397 Initializing NVMe Controllers 00:40:17.397 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:40:17.397 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:17.397 Initialization complete. Launching workers. 00:40:17.397 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146241, failed: 0 00:40:17.397 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36614, failed to submit 109627 00:40:17.397 success 0, unsuccessful 36614, failed 0 00:40:17.397 07:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:40:17.397 07:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:40:17.397 07:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:40:17.397 07:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:17.397 07:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:17.397 07:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:40:17.397 07:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:17.397 07:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:40:17.397 07:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:40:17.397 07:49:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:20.694 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:40:20.694 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:40:20.694 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:40:22.609 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:40:22.609 00:40:22.609 real 0m20.417s 00:40:22.609 user 0m9.845s 00:40:22.609 sys 0m6.169s 00:40:22.609 07:49:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:22.609 07:49:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:22.609 ************************************ 00:40:22.609 END TEST kernel_target_abort 00:40:22.609 ************************************ 00:40:22.609 07:49:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:40:22.609 07:49:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:40:22.609 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:22.609 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:40:22.609 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:22.609 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:40:22.609 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:22.609 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:22.870 rmmod nvme_tcp 00:40:22.870 rmmod nvme_fabrics 00:40:22.870 rmmod nvme_keyring 00:40:22.870 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:22.870 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:40:22.870 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:40:22.870 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1782169 ']' 00:40:22.870 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1782169 00:40:22.870 07:49:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1782169 ']' 00:40:22.870 07:49:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1782169 00:40:22.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1782169) - No such process 00:40:22.870 07:49:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1782169 is not found' 00:40:22.870 Process with pid 1782169 is not found 00:40:22.870 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:40:22.870 07:49:50 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:40:26.174 Waiting for block devices as requested 00:40:26.174 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:26.436 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:26.436 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:26.436 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:26.697 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:26.697 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:26.697 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:26.958 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:26.958 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:40:27.218 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:40:27.218 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:40:27.218 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:40:27.478 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:40:27.478 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:40:27.478 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:40:27.739 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:40:27.739 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:28.000 07:49:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.544 07:49:58 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:30.544 00:40:30.544 real 0m52.597s 00:40:30.544 user 1m5.120s 00:40:30.544 sys 0m19.368s 00:40:30.544 07:49:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:30.544 07:49:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:30.544 ************************************ 00:40:30.544 END TEST nvmf_abort_qd_sizes 00:40:30.544 ************************************ 00:40:30.544 07:49:58 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:30.544 07:49:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:30.544 07:49:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:30.544 07:49:58 -- common/autotest_common.sh@10 -- # set +x 00:40:30.544 ************************************ 00:40:30.544 START TEST keyring_file 00:40:30.544 ************************************ 00:40:30.544 07:49:58 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:30.544 * Looking for test storage... 
00:40:30.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:30.544 07:49:58 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:30.544 07:49:58 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:40:30.544 07:49:58 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:30.544 07:49:58 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@345 -- # : 1 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@353 -- # local d=1 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@355 -- # echo 1 00:40:30.544 07:49:58 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@353 -- # local d=2 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@355 -- # echo 2 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@368 -- # return 0 00:40:30.545 07:49:58 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:30.545 07:49:58 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.545 --rc genhtml_branch_coverage=1 00:40:30.545 --rc genhtml_function_coverage=1 00:40:30.545 --rc genhtml_legend=1 00:40:30.545 --rc geninfo_all_blocks=1 00:40:30.545 --rc geninfo_unexecuted_blocks=1 00:40:30.545 00:40:30.545 ' 00:40:30.545 07:49:58 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.545 --rc genhtml_branch_coverage=1 00:40:30.545 --rc genhtml_function_coverage=1 00:40:30.545 --rc genhtml_legend=1 00:40:30.545 --rc geninfo_all_blocks=1 
00:40:30.545 --rc geninfo_unexecuted_blocks=1 00:40:30.545 00:40:30.545 ' 00:40:30.545 07:49:58 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.545 --rc genhtml_branch_coverage=1 00:40:30.545 --rc genhtml_function_coverage=1 00:40:30.545 --rc genhtml_legend=1 00:40:30.545 --rc geninfo_all_blocks=1 00:40:30.545 --rc geninfo_unexecuted_blocks=1 00:40:30.545 00:40:30.545 ' 00:40:30.545 07:49:58 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:30.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.545 --rc genhtml_branch_coverage=1 00:40:30.545 --rc genhtml_function_coverage=1 00:40:30.545 --rc genhtml_legend=1 00:40:30.545 --rc geninfo_all_blocks=1 00:40:30.545 --rc geninfo_unexecuted_blocks=1 00:40:30.545 00:40:30.545 ' 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.545 07:49:58 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.545 07:49:58 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.545 07:49:58 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.545 07:49:58 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.545 07:49:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:30.545 07:49:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@51 -- # : 0 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:30.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
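The prep_key trace around this point writes each hex key into a mode-0600 temp file in NVMe TLS PSK interchange form. A sketch of what the inline 'python -' step plausibly computes — base64 of the raw key bytes with a little-endian CRC32 appended, framed as NVMeTLSkey-1:<digest>:...: per the NVMe/TCP TLS interchange format; treat the exact encoding here as an assumption:

key=00112233445566778899aabbccddeeff digest=0 python3 - <<'PY'
import base64, binascii, os, struct
key = bytes.fromhex(os.environ["key"])
crc = struct.pack("<I", binascii.crc32(key) & 0xFFFFFFFF)  # CRC32 over the key bytes
print(f"NVMeTLSkey-1:{int(os.environ['digest']):02x}:"
      f"{base64.b64encode(key + crc).decode()}:")
PY
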
00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KxAx3wjNiM 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KxAx3wjNiM 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KxAx3wjNiM 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.KxAx3wjNiM 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.I6LgpzDXSu 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:30.545 07:49:58 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.I6LgpzDXSu 00:40:30.545 07:49:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.I6LgpzDXSu 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.I6LgpzDXSu 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=1792699 00:40:30.545 07:49:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1792699 00:40:30.545 07:49:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1792699 ']' 00:40:30.545 07:49:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:30.545 07:49:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:30.545 07:49:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:30.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:30.546 07:49:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:30.546 07:49:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:30.546 [2024-11-26 07:49:58.556491] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:40:30.546 [2024-11-26 07:49:58.556553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792699 ] 00:40:30.806 [2024-11-26 07:49:58.641882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.806 [2024-11-26 07:49:58.695134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:31.377 07:49:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:31.377 [2024-11-26 07:49:59.394240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:31.377 null0 00:40:31.377 [2024-11-26 07:49:59.426286] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:31.377 [2024-11-26 07:49:59.426789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.377 07:49:59 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:31.377 [2024-11-26 07:49:59.458348] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:31.377 request: 00:40:31.377 { 00:40:31.377 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:31.377 "secure_channel": false, 00:40:31.377 "listen_address": { 00:40:31.377 "trtype": "tcp", 00:40:31.377 "traddr": "127.0.0.1", 00:40:31.377 "trsvcid": "4420" 00:40:31.377 }, 00:40:31.377 "method": "nvmf_subsystem_add_listener", 00:40:31.377 "req_id": 1 00:40:31.377 } 00:40:31.377 Got JSON-RPC error response 00:40:31.377 response: 00:40:31.377 { 00:40:31.377 
"code": -32602, 00:40:31.377 "message": "Invalid parameters" 00:40:31.377 } 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:31.377 07:49:59 keyring_file -- keyring/file.sh@47 -- # bperfpid=1792735 00:40:31.377 07:49:59 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1792735 /var/tmp/bperf.sock 00:40:31.377 07:49:59 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:31.377 07:49:59 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1792735 ']' 00:40:31.638 07:49:59 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:31.638 07:49:59 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:31.638 07:49:59 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:31.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:31.638 07:49:59 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:31.638 07:49:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:31.638 [2024-11-26 07:49:59.518612] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:40:31.638 [2024-11-26 07:49:59.518678] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792735 ] 00:40:31.638 [2024-11-26 07:49:59.608825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:31.638 [2024-11-26 07:49:59.661881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:32.579 07:50:00 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:32.579 07:50:00 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:32.579 07:50:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KxAx3wjNiM 00:40:32.579 07:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KxAx3wjNiM 00:40:32.579 07:50:00 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.I6LgpzDXSu 00:40:32.580 07:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.I6LgpzDXSu 00:40:32.839 07:50:00 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:40:32.839 07:50:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:32.839 07:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:32.839 07:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:32.839 07:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:40:32.839 07:50:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.KxAx3wjNiM == \/\t\m\p\/\t\m\p\.\K\x\A\x\3\w\j\N\i\M ]] 00:40:32.839 07:50:00 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:40:32.839 07:50:00 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:40:32.839 07:50:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:32.839 07:50:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:32.839 07:50:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:33.101 07:50:01 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.I6LgpzDXSu == \/\t\m\p\/\t\m\p\.\I\6\L\g\p\z\D\X\S\u ]] 00:40:33.101 07:50:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:40:33.101 07:50:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:33.101 07:50:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.101 07:50:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.101 07:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.101 07:50:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:33.362 07:50:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:33.362 07:50:01 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:40:33.362 07:50:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:33.362 07:50:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.362 07:50:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.362 07:50:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:33.362 07:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.623 07:50:01 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:40:33.623 07:50:01 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:33.623 07:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:33.623 [2024-11-26 07:50:01.634281] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:33.623 nvme0n1 00:40:33.884 07:50:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:40:33.884 07:50:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:33.884 07:50:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.884 07:50:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.884 07:50:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:33.884 07:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:33.884 07:50:01 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:40:33.884 07:50:01 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:40:33.884 07:50:01 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:40:33.884 07:50:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:33.884 07:50:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:33.884 07:50:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:33.884 07:50:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:34.146 07:50:02 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:40:34.146 07:50:02 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:34.146 Running I/O for 1 seconds... 00:40:35.532 17466.00 IOPS, 68.23 MiB/s 00:40:35.532 Latency(us) 00:40:35.532 [2024-11-26T06:50:03.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:35.532 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:40:35.532 nvme0n1 : 1.00 17526.10 68.46 0.00 0.00 7290.24 2416.64 14745.60 00:40:35.532 [2024-11-26T06:50:03.630Z] =================================================================================================================== 00:40:35.532 [2024-11-26T06:50:03.630Z] Total : 17526.10 68.46 0.00 0.00 7290.24 2416.64 14745.60 00:40:35.532 { 00:40:35.532 "results": [ 00:40:35.532 { 00:40:35.532 "job": "nvme0n1", 00:40:35.532 "core_mask": "0x2", 00:40:35.532 "workload": "randrw", 00:40:35.532 "percentage": 50, 00:40:35.532 "status": "finished", 00:40:35.532 "queue_depth": 128, 00:40:35.532 "io_size": 4096, 00:40:35.532 "runtime": 1.003931, 00:40:35.532 "iops": 17526.104881709998, 00:40:35.532 "mibps": 68.46134719417968, 00:40:35.532 "io_failed": 0, 00:40:35.532 "io_timeout": 0, 00:40:35.532 "avg_latency_us": 7290.242534053234, 00:40:35.532 "min_latency_us": 2416.64, 00:40:35.532 "max_latency_us": 14745.6 00:40:35.532 } 00:40:35.532 ], 00:40:35.532 "core_count": 1 00:40:35.532 } 00:40:35.532 07:50:03 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:35.532 07:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:35.532 07:50:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:40:35.532 07:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:35.532 07:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:35.532 07:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:35.532 07:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:35.532 07:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:35.532 07:50:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:40:35.532 07:50:03 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:40:35.532 07:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:35.532 07:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:35.532 07:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:35.532 07:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:35.532 07:50:03 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:35.794 07:50:03 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:40:35.794 07:50:03 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:35.794 07:50:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:35.794 07:50:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:35.794 07:50:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:35.794 07:50:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:35.794 07:50:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:35.794 07:50:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:35.794 07:50:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:35.794 07:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:36.054 [2024-11-26 07:50:03.949686] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:36.054 [2024-11-26 07:50:03.950277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d21c30 (107): Transport endpoint is not connected 00:40:36.054 [2024-11-26 07:50:03.951272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d21c30 (9): Bad file descriptor 00:40:36.054 [2024-11-26 07:50:03.952274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:36.054 [2024-11-26 07:50:03.952282] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:36.054 [2024-11-26 07:50:03.952288] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:36.054 [2024-11-26 07:50:03.952294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
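After detaching (key0 is back to one reference), the test retries the same attach with key1. The target in this run was set up for key0's PSK (the earlier attach with key0 succeeded), so the TLS handshake with key1 cannot complete: the initiator logs "Transport endpoint is not connected", puts the controller in a failed state, and the RPC surfaces -5 (Input/output error), as the JSON-RPC exchange dumped next shows. An illustrative reproduction of the negative assertion, with a plain expected-failure check standing in for the test framework's NOT wrapper:

# Wrong key: the connection dies during the TLS handshake, so the attach RPC
# must fail; succeeding here would itself be a test failure.
if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk key1; then
    echo "unexpected: attach with the wrong PSK succeeded" >&2
    exit 1
fi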
00:40:36.054 request: 00:40:36.054 { 00:40:36.054 "name": "nvme0", 00:40:36.054 "trtype": "tcp", 00:40:36.054 "traddr": "127.0.0.1", 00:40:36.054 "adrfam": "ipv4", 00:40:36.054 "trsvcid": "4420", 00:40:36.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:36.054 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:36.054 "prchk_reftag": false, 00:40:36.054 "prchk_guard": false, 00:40:36.054 "hdgst": false, 00:40:36.054 "ddgst": false, 00:40:36.054 "psk": "key1", 00:40:36.054 "allow_unrecognized_csi": false, 00:40:36.054 "method": "bdev_nvme_attach_controller", 00:40:36.054 "req_id": 1 00:40:36.054 } 00:40:36.054 Got JSON-RPC error response 00:40:36.054 response: 00:40:36.054 { 00:40:36.054 "code": -5, 00:40:36.054 "message": "Input/output error" 00:40:36.054 } 00:40:36.054 07:50:03 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:36.054 07:50:03 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:36.054 07:50:03 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:36.054 07:50:03 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:36.054 07:50:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:40:36.054 07:50:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:36.054 07:50:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:36.054 07:50:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:36.054 07:50:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:36.054 07:50:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:36.316 07:50:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:36.316 07:50:04 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:40:36.316 07:50:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:36.316 07:50:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:36.316 07:50:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:36.316 07:50:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:36.316 07:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:36.316 07:50:04 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:40:36.316 07:50:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:40:36.316 07:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:36.575 07:50:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:40:36.575 07:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:36.835 07:50:04 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:40:36.835 07:50:04 keyring_file -- keyring/file.sh@78 -- # jq length 00:40:36.835 07:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:36.835 07:50:04 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:40:36.835 07:50:04 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.KxAx3wjNiM 00:40:36.835 07:50:04 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.KxAx3wjNiM 00:40:36.835 07:50:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:36.835 07:50:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.KxAx3wjNiM 00:40:36.836 07:50:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:36.836 07:50:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:36.836 07:50:04 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:36.836 07:50:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:36.836 07:50:04 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KxAx3wjNiM 00:40:36.836 07:50:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KxAx3wjNiM 00:40:37.096 [2024-11-26 07:50:05.056144] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KxAx3wjNiM': 0100660 00:40:37.096 [2024-11-26 07:50:05.056165] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:37.096 request: 00:40:37.096 { 00:40:37.096 "name": "key0", 00:40:37.096 "path": "/tmp/tmp.KxAx3wjNiM", 00:40:37.096 "method": "keyring_file_add_key", 00:40:37.096 "req_id": 1 00:40:37.096 } 00:40:37.096 Got JSON-RPC error response 00:40:37.096 response: 00:40:37.096 { 00:40:37.096 "code": -1, 00:40:37.096 "message": "Operation not permitted" 00:40:37.096 } 00:40:37.096 07:50:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:37.096 07:50:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:37.096 07:50:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:37.096 07:50:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:37.096 07:50:05 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.KxAx3wjNiM 00:40:37.096 07:50:05 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KxAx3wjNiM 00:40:37.096 07:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KxAx3wjNiM 00:40:37.357 07:50:05 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.KxAx3wjNiM 00:40:37.357 07:50:05 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:40:37.357 07:50:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:37.357 07:50:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:37.357 07:50:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:37.357 07:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:37.357 07:50:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:37.357 07:50:05 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:40:37.357 07:50:05 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:37.357 07:50:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:40:37.357 07:50:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:37.357 07:50:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:37.357 07:50:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:37.357 07:50:05 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:37.357 07:50:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:37.357 07:50:05 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:37.357 07:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:37.619 [2024-11-26 07:50:05.593510] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.KxAx3wjNiM': No such file or directory 00:40:37.619 [2024-11-26 07:50:05.593525] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:37.619 [2024-11-26 07:50:05.593538] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:37.619 [2024-11-26 07:50:05.593544] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:40:37.619 [2024-11-26 07:50:05.593550] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:37.619 [2024-11-26 07:50:05.593556] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:37.619 request: 00:40:37.619 { 00:40:37.619 "name": "nvme0", 00:40:37.619 "trtype": "tcp", 00:40:37.619 "traddr": "127.0.0.1", 00:40:37.619 "adrfam": "ipv4", 00:40:37.619 "trsvcid": "4420", 00:40:37.619 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:37.619 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:37.619 "prchk_reftag": false, 00:40:37.619 "prchk_guard": false, 00:40:37.619 "hdgst": false, 00:40:37.619 "ddgst": false, 00:40:37.619 "psk": "key0", 00:40:37.619 "allow_unrecognized_csi": false, 00:40:37.619 "method": "bdev_nvme_attach_controller", 00:40:37.619 "req_id": 1 00:40:37.619 } 00:40:37.619 Got JSON-RPC error response 00:40:37.619 response: 00:40:37.619 { 00:40:37.619 "code": -19, 00:40:37.619 "message": "No such device" 00:40:37.619 } 00:40:37.619 07:50:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:40:37.619 07:50:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:37.619 07:50:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:37.619 07:50:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:37.619 07:50:05 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:40:37.619 07:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:37.879 07:50:05 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:37.879 07:50:05 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:40:37.879 07:50:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:37.879 07:50:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:37.879 07:50:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:37.879 07:50:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:37.879 07:50:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VQU7u60wk1 00:40:37.879 07:50:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:37.879 07:50:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:37.879 07:50:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:40:37.880 07:50:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:37.880 07:50:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:37.880 07:50:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:40:37.880 07:50:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:40:37.880 07:50:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VQU7u60wk1 00:40:37.880 07:50:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VQU7u60wk1 00:40:37.880 07:50:05 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.VQU7u60wk1 00:40:37.880 07:50:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VQU7u60wk1 00:40:37.880 07:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VQU7u60wk1 00:40:38.140 07:50:05 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:38.140 07:50:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:38.140 nvme0n1 00:40:38.400 07:50:06 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:40:38.400 07:50:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:38.400 07:50:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:38.400 07:50:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:38.400 07:50:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:38.400 07:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:38.400 07:50:06 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:40:38.400 07:50:06 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:40:38.400 07:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:38.661 07:50:06 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:40:38.661 07:50:06 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:40:38.661 07:50:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:38.661 07:50:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:38.661 07:50:06 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:38.921 07:50:06 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:40:38.921 07:50:06 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:40:38.921 07:50:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:38.921 07:50:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:38.921 07:50:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:38.921 07:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:38.921 07:50:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:38.921 07:50:06 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:40:38.921 07:50:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:38.921 07:50:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:39.180 07:50:07 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:40:39.180 07:50:07 keyring_file -- keyring/file.sh@105 -- # jq length 00:40:39.180 07:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:39.439 07:50:07 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:40:39.439 07:50:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VQU7u60wk1 00:40:39.439 07:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VQU7u60wk1 00:40:39.439 07:50:07 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.I6LgpzDXSu 00:40:39.439 07:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.I6LgpzDXSu 00:40:39.700 07:50:07 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:39.700 07:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:39.960 nvme0n1 00:40:39.960 07:50:07 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:40:39.960 07:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:40.220 07:50:08 keyring_file -- keyring/file.sh@113 -- # config='{ 00:40:40.220 "subsystems": [ 00:40:40.220 { 00:40:40.221 "subsystem": "keyring", 00:40:40.221 "config": [ 00:40:40.221 { 00:40:40.221 "method": "keyring_file_add_key", 00:40:40.221 "params": { 00:40:40.221 "name": "key0", 00:40:40.221 "path": "/tmp/tmp.VQU7u60wk1" 00:40:40.221 } 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "method": "keyring_file_add_key", 00:40:40.221 "params": { 00:40:40.221 "name": "key1", 00:40:40.221 "path": "/tmp/tmp.I6LgpzDXSu" 00:40:40.221 } 00:40:40.221 } 00:40:40.221 ] 00:40:40.221 
}, 00:40:40.221 { 00:40:40.221 "subsystem": "iobuf", 00:40:40.221 "config": [ 00:40:40.221 { 00:40:40.221 "method": "iobuf_set_options", 00:40:40.221 "params": { 00:40:40.221 "small_pool_count": 8192, 00:40:40.221 "large_pool_count": 1024, 00:40:40.221 "small_bufsize": 8192, 00:40:40.221 "large_bufsize": 135168, 00:40:40.221 "enable_numa": false 00:40:40.221 } 00:40:40.221 } 00:40:40.221 ] 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "subsystem": "sock", 00:40:40.221 "config": [ 00:40:40.221 { 00:40:40.221 "method": "sock_set_default_impl", 00:40:40.221 "params": { 00:40:40.221 "impl_name": "posix" 00:40:40.221 } 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "method": "sock_impl_set_options", 00:40:40.221 "params": { 00:40:40.221 "impl_name": "ssl", 00:40:40.221 "recv_buf_size": 4096, 00:40:40.221 "send_buf_size": 4096, 00:40:40.221 "enable_recv_pipe": true, 00:40:40.221 "enable_quickack": false, 00:40:40.221 "enable_placement_id": 0, 00:40:40.221 "enable_zerocopy_send_server": true, 00:40:40.221 "enable_zerocopy_send_client": false, 00:40:40.221 "zerocopy_threshold": 0, 00:40:40.221 "tls_version": 0, 00:40:40.221 "enable_ktls": false 00:40:40.221 } 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "method": "sock_impl_set_options", 00:40:40.221 "params": { 00:40:40.221 "impl_name": "posix", 00:40:40.221 "recv_buf_size": 2097152, 00:40:40.221 "send_buf_size": 2097152, 00:40:40.221 "enable_recv_pipe": true, 00:40:40.221 "enable_quickack": false, 00:40:40.221 "enable_placement_id": 0, 00:40:40.221 "enable_zerocopy_send_server": true, 00:40:40.221 "enable_zerocopy_send_client": false, 00:40:40.221 "zerocopy_threshold": 0, 00:40:40.221 "tls_version": 0, 00:40:40.221 "enable_ktls": false 00:40:40.221 } 00:40:40.221 } 00:40:40.221 ] 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "subsystem": "vmd", 00:40:40.221 "config": [] 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "subsystem": "accel", 00:40:40.221 "config": [ 00:40:40.221 { 00:40:40.221 "method": "accel_set_options", 00:40:40.221 "params": { 00:40:40.221 "small_cache_size": 128, 00:40:40.221 "large_cache_size": 16, 00:40:40.221 "task_count": 2048, 00:40:40.221 "sequence_count": 2048, 00:40:40.221 "buf_count": 2048 00:40:40.221 } 00:40:40.221 } 00:40:40.221 ] 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "subsystem": "bdev", 00:40:40.221 "config": [ 00:40:40.221 { 00:40:40.221 "method": "bdev_set_options", 00:40:40.221 "params": { 00:40:40.221 "bdev_io_pool_size": 65535, 00:40:40.221 "bdev_io_cache_size": 256, 00:40:40.221 "bdev_auto_examine": true, 00:40:40.221 "iobuf_small_cache_size": 128, 00:40:40.221 "iobuf_large_cache_size": 16 00:40:40.221 } 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "method": "bdev_raid_set_options", 00:40:40.221 "params": { 00:40:40.221 "process_window_size_kb": 1024, 00:40:40.221 "process_max_bandwidth_mb_sec": 0 00:40:40.221 } 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "method": "bdev_iscsi_set_options", 00:40:40.221 "params": { 00:40:40.221 "timeout_sec": 30 00:40:40.221 } 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "method": "bdev_nvme_set_options", 00:40:40.221 "params": { 00:40:40.221 "action_on_timeout": "none", 00:40:40.221 "timeout_us": 0, 00:40:40.221 "timeout_admin_us": 0, 00:40:40.221 "keep_alive_timeout_ms": 10000, 00:40:40.221 "arbitration_burst": 0, 00:40:40.221 "low_priority_weight": 0, 00:40:40.221 "medium_priority_weight": 0, 00:40:40.221 "high_priority_weight": 0, 00:40:40.221 "nvme_adminq_poll_period_us": 10000, 00:40:40.221 "nvme_ioq_poll_period_us": 0, 00:40:40.221 "io_queue_requests": 512, 00:40:40.221 
"delay_cmd_submit": true, 00:40:40.221 "transport_retry_count": 4, 00:40:40.221 "bdev_retry_count": 3, 00:40:40.221 "transport_ack_timeout": 0, 00:40:40.221 "ctrlr_loss_timeout_sec": 0, 00:40:40.221 "reconnect_delay_sec": 0, 00:40:40.221 "fast_io_fail_timeout_sec": 0, 00:40:40.221 "disable_auto_failback": false, 00:40:40.221 "generate_uuids": false, 00:40:40.221 "transport_tos": 0, 00:40:40.221 "nvme_error_stat": false, 00:40:40.221 "rdma_srq_size": 0, 00:40:40.221 "io_path_stat": false, 00:40:40.221 "allow_accel_sequence": false, 00:40:40.221 "rdma_max_cq_size": 0, 00:40:40.221 "rdma_cm_event_timeout_ms": 0, 00:40:40.221 "dhchap_digests": [ 00:40:40.221 "sha256", 00:40:40.221 "sha384", 00:40:40.221 "sha512" 00:40:40.221 ], 00:40:40.221 "dhchap_dhgroups": [ 00:40:40.221 "null", 00:40:40.221 "ffdhe2048", 00:40:40.221 "ffdhe3072", 00:40:40.221 "ffdhe4096", 00:40:40.221 "ffdhe6144", 00:40:40.221 "ffdhe8192" 00:40:40.221 ] 00:40:40.221 } 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "method": "bdev_nvme_attach_controller", 00:40:40.221 "params": { 00:40:40.221 "name": "nvme0", 00:40:40.221 "trtype": "TCP", 00:40:40.221 "adrfam": "IPv4", 00:40:40.221 "traddr": "127.0.0.1", 00:40:40.221 "trsvcid": "4420", 00:40:40.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:40.221 "prchk_reftag": false, 00:40:40.221 "prchk_guard": false, 00:40:40.221 "ctrlr_loss_timeout_sec": 0, 00:40:40.221 "reconnect_delay_sec": 0, 00:40:40.221 "fast_io_fail_timeout_sec": 0, 00:40:40.221 "psk": "key0", 00:40:40.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:40.221 "hdgst": false, 00:40:40.221 "ddgst": false, 00:40:40.221 "multipath": "multipath" 00:40:40.221 } 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "method": "bdev_nvme_set_hotplug", 00:40:40.221 "params": { 00:40:40.221 "period_us": 100000, 00:40:40.221 "enable": false 00:40:40.221 } 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "method": "bdev_wait_for_examine" 00:40:40.221 } 00:40:40.221 ] 00:40:40.221 }, 00:40:40.221 { 00:40:40.221 "subsystem": "nbd", 00:40:40.221 "config": [] 00:40:40.221 } 00:40:40.221 ] 00:40:40.221 }' 00:40:40.221 07:50:08 keyring_file -- keyring/file.sh@115 -- # killprocess 1792735 00:40:40.221 07:50:08 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1792735 ']' 00:40:40.221 07:50:08 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1792735 00:40:40.221 07:50:08 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:40.221 07:50:08 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:40.221 07:50:08 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1792735 00:40:40.221 07:50:08 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:40.221 07:50:08 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:40.221 07:50:08 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1792735' 00:40:40.221 killing process with pid 1792735 00:40:40.221 07:50:08 keyring_file -- common/autotest_common.sh@973 -- # kill 1792735 00:40:40.222 Received shutdown signal, test time was about 1.000000 seconds 00:40:40.222 00:40:40.222 Latency(us) 00:40:40.222 [2024-11-26T06:50:08.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:40.222 [2024-11-26T06:50:08.320Z] =================================================================================================================== 00:40:40.222 [2024-11-26T06:50:08.320Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:40.222 07:50:08 
keyring_file -- common/autotest_common.sh@978 -- # wait 1792735 00:40:40.482 07:50:08 keyring_file -- keyring/file.sh@118 -- # bperfpid=1794549 00:40:40.482 07:50:08 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1794549 /var/tmp/bperf.sock 00:40:40.482 07:50:08 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1794549 ']' 00:40:40.482 07:50:08 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:40.482 07:50:08 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:40.482 07:50:08 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:40.482 07:50:08 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:40.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:40.482 07:50:08 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:40.482 07:50:08 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:40:40.482 "subsystems": [ 00:40:40.482 { 00:40:40.482 "subsystem": "keyring", 00:40:40.482 "config": [ 00:40:40.482 { 00:40:40.482 "method": "keyring_file_add_key", 00:40:40.482 "params": { 00:40:40.482 "name": "key0", 00:40:40.482 "path": "/tmp/tmp.VQU7u60wk1" 00:40:40.482 } 00:40:40.482 }, 00:40:40.482 { 00:40:40.482 "method": "keyring_file_add_key", 00:40:40.482 "params": { 00:40:40.482 "name": "key1", 00:40:40.482 "path": "/tmp/tmp.I6LgpzDXSu" 00:40:40.482 } 00:40:40.482 } 00:40:40.482 ] 00:40:40.482 }, 00:40:40.482 { 00:40:40.482 "subsystem": "iobuf", 00:40:40.482 "config": [ 00:40:40.482 { 00:40:40.482 "method": "iobuf_set_options", 00:40:40.482 "params": { 00:40:40.482 "small_pool_count": 8192, 00:40:40.482 "large_pool_count": 1024, 00:40:40.482 "small_bufsize": 8192, 00:40:40.482 "large_bufsize": 135168, 00:40:40.482 "enable_numa": false 00:40:40.482 } 00:40:40.482 } 00:40:40.482 ] 00:40:40.482 }, 00:40:40.482 { 00:40:40.482 "subsystem": "sock", 00:40:40.482 "config": [ 00:40:40.482 { 00:40:40.482 "method": "sock_set_default_impl", 00:40:40.482 "params": { 00:40:40.482 "impl_name": "posix" 00:40:40.482 } 00:40:40.482 }, 00:40:40.482 { 00:40:40.482 "method": "sock_impl_set_options", 00:40:40.482 "params": { 00:40:40.482 "impl_name": "ssl", 00:40:40.482 "recv_buf_size": 4096, 00:40:40.482 "send_buf_size": 4096, 00:40:40.482 "enable_recv_pipe": true, 00:40:40.482 "enable_quickack": false, 00:40:40.482 "enable_placement_id": 0, 00:40:40.482 "enable_zerocopy_send_server": true, 00:40:40.482 "enable_zerocopy_send_client": false, 00:40:40.482 "zerocopy_threshold": 0, 00:40:40.482 "tls_version": 0, 00:40:40.482 "enable_ktls": false 00:40:40.482 } 00:40:40.482 }, 00:40:40.482 { 00:40:40.482 "method": "sock_impl_set_options", 00:40:40.483 "params": { 00:40:40.483 "impl_name": "posix", 00:40:40.483 "recv_buf_size": 2097152, 00:40:40.483 "send_buf_size": 2097152, 00:40:40.483 "enable_recv_pipe": true, 00:40:40.483 "enable_quickack": false, 00:40:40.483 "enable_placement_id": 0, 00:40:40.483 "enable_zerocopy_send_server": true, 00:40:40.483 "enable_zerocopy_send_client": false, 00:40:40.483 "zerocopy_threshold": 0, 00:40:40.483 "tls_version": 0, 00:40:40.483 "enable_ktls": false 00:40:40.483 } 00:40:40.483 } 00:40:40.483 ] 00:40:40.483 }, 00:40:40.483 { 00:40:40.483 "subsystem": "vmd", 00:40:40.483 "config": [] 00:40:40.483 }, 
00:40:40.483 { 00:40:40.483 "subsystem": "accel", 00:40:40.483 "config": [ 00:40:40.483 { 00:40:40.483 "method": "accel_set_options", 00:40:40.483 "params": { 00:40:40.483 "small_cache_size": 128, 00:40:40.483 "large_cache_size": 16, 00:40:40.483 "task_count": 2048, 00:40:40.483 "sequence_count": 2048, 00:40:40.483 "buf_count": 2048 00:40:40.483 } 00:40:40.483 } 00:40:40.483 ] 00:40:40.483 }, 00:40:40.483 { 00:40:40.483 "subsystem": "bdev", 00:40:40.483 "config": [ 00:40:40.483 { 00:40:40.483 "method": "bdev_set_options", 00:40:40.483 "params": { 00:40:40.483 "bdev_io_pool_size": 65535, 00:40:40.483 "bdev_io_cache_size": 256, 00:40:40.483 "bdev_auto_examine": true, 00:40:40.483 "iobuf_small_cache_size": 128, 00:40:40.483 "iobuf_large_cache_size": 16 00:40:40.483 } 00:40:40.483 }, 00:40:40.483 { 00:40:40.483 "method": "bdev_raid_set_options", 00:40:40.483 "params": { 00:40:40.483 "process_window_size_kb": 1024, 00:40:40.483 "process_max_bandwidth_mb_sec": 0 00:40:40.483 } 00:40:40.483 }, 00:40:40.483 { 00:40:40.483 "method": "bdev_iscsi_set_options", 00:40:40.483 "params": { 00:40:40.483 "timeout_sec": 30 00:40:40.483 } 00:40:40.483 }, 00:40:40.483 { 00:40:40.483 "method": "bdev_nvme_set_options", 00:40:40.483 "params": { 00:40:40.483 "action_on_timeout": "none", 00:40:40.483 "timeout_us": 0, 00:40:40.483 "timeout_admin_us": 0, 00:40:40.483 "keep_alive_timeout_ms": 10000, 00:40:40.483 "arbitration_burst": 0, 00:40:40.483 "low_priority_weight": 0, 00:40:40.483 "medium_priority_weight": 0, 00:40:40.483 "high_priority_weight": 0, 00:40:40.483 "nvme_adminq_poll_period_us": 10000, 00:40:40.483 "nvme_ioq_poll_period_us": 0, 00:40:40.483 "io_queue_requests": 512, 00:40:40.483 "delay_cmd_submit": true, 00:40:40.483 "transport_retry_count": 4, 00:40:40.483 "bdev_retry_count": 3, 00:40:40.483 "transport_ack_timeout": 0, 00:40:40.483 "ctrlr_loss_timeout_sec": 0, 00:40:40.483 "reconnect_delay_sec": 0, 00:40:40.483 "fast_io_fail_timeout_sec": 0, 00:40:40.483 "disable_auto_failback": false, 00:40:40.483 "generate_uuids": false, 00:40:40.483 "transport_tos": 0, 00:40:40.483 "nvme_error_stat": false, 00:40:40.483 "rdma_srq_size": 0, 00:40:40.483 "io_path_stat": false, 00:40:40.483 "allow_accel_sequence": false, 00:40:40.483 "rdma_max_cq_size": 0, 00:40:40.483 "rdma_cm_event_timeout_ms": 0, 00:40:40.483 "dhchap_digests": [ 00:40:40.483 "sha256", 00:40:40.483 "sha384", 00:40:40.483 "sha512" 00:40:40.483 ], 00:40:40.483 "dhchap_dhgroups": [ 00:40:40.483 "null", 00:40:40.483 "ffdhe2048", 00:40:40.483 "ffdhe3072", 00:40:40.483 "ffdhe4096", 00:40:40.483 "ffdhe6144", 00:40:40.483 "ffdhe8192" 00:40:40.483 ] 00:40:40.483 } 00:40:40.483 }, 00:40:40.483 { 00:40:40.483 "method": "bdev_nvme_attach_controller", 00:40:40.483 "params": { 00:40:40.483 "name": "nvme0", 00:40:40.483 "trtype": "TCP", 00:40:40.483 "adrfam": "IPv4", 00:40:40.483 "traddr": "127.0.0.1", 00:40:40.483 "trsvcid": "4420", 00:40:40.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:40.483 "prchk_reftag": false, 00:40:40.483 "prchk_guard": false, 00:40:40.483 "ctrlr_loss_timeout_sec": 0, 00:40:40.483 "reconnect_delay_sec": 0, 00:40:40.483 "fast_io_fail_timeout_sec": 0, 00:40:40.483 "psk": "key0", 00:40:40.483 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:40.483 "hdgst": false, 00:40:40.483 "ddgst": false, 00:40:40.483 "multipath": "multipath" 00:40:40.483 } 00:40:40.483 }, 00:40:40.483 { 00:40:40.483 "method": "bdev_nvme_set_hotplug", 00:40:40.483 "params": { 00:40:40.483 "period_us": 100000, 00:40:40.483 "enable": false 00:40:40.483 } 00:40:40.483 }, 
00:40:40.483 { 00:40:40.483 "method": "bdev_wait_for_examine" 00:40:40.483 } 00:40:40.483 ] 00:40:40.483 }, 00:40:40.483 { 00:40:40.483 "subsystem": "nbd", 00:40:40.483 "config": [] 00:40:40.483 } 00:40:40.483 ] 00:40:40.483 }' 00:40:40.483 07:50:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:40.483 [2024-11-26 07:50:08.378815] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 00:40:40.483 [2024-11-26 07:50:08.378873] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794549 ] 00:40:40.483 [2024-11-26 07:50:08.460268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.483 [2024-11-26 07:50:08.489140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:40.744 [2024-11-26 07:50:08.631983] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:41.358 07:50:09 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:41.358 07:50:09 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:40:41.358 07:50:09 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:40:41.358 07:50:09 keyring_file -- keyring/file.sh@121 -- # jq length 00:40:41.358 07:50:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:41.358 07:50:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:41.358 07:50:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:40:41.358 07:50:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:41.358 07:50:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:41.358 07:50:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:41.358 07:50:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:41.358 07:50:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:41.659 07:50:09 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:40:41.659 07:50:09 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:40:41.659 07:50:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:41.659 07:50:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:41.659 07:50:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:41.659 07:50:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:41.659 07:50:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:41.659 07:50:09 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:40:41.659 07:50:09 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:40:41.659 07:50:09 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:40:41.659 07:50:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:41.975 07:50:09 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:40:41.975 07:50:09 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:41.975 07:50:09 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.VQU7u60wk1 /tmp/tmp.I6LgpzDXSu 00:40:41.975 07:50:09 keyring_file -- keyring/file.sh@20 -- # killprocess 1794549 00:40:41.975 07:50:09 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1794549 ']' 00:40:41.975 07:50:09 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1794549 00:40:41.975 07:50:09 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:41.975 07:50:09 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:41.975 07:50:09 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1794549 00:40:41.975 07:50:09 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:41.975 07:50:09 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:41.975 07:50:09 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1794549' 00:40:41.975 killing process with pid 1794549 00:40:41.975 07:50:09 keyring_file -- common/autotest_common.sh@973 -- # kill 1794549 00:40:41.975 Received shutdown signal, test time was about 1.000000 seconds 00:40:41.975 00:40:41.975 Latency(us) 00:40:41.975 [2024-11-26T06:50:10.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:41.975 [2024-11-26T06:50:10.073Z] =================================================================================================================== 00:40:41.975 [2024-11-26T06:50:10.073Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:41.975 07:50:09 keyring_file -- common/autotest_common.sh@978 -- # wait 1794549 00:40:42.235 07:50:10 keyring_file -- keyring/file.sh@21 -- # killprocess 1792699 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1792699 ']' 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1792699 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@959 -- # uname 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1792699 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1792699' 00:40:42.235 killing process with pid 1792699 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@973 -- # kill 1792699 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@978 -- # wait 1792699 00:40:42.235 00:40:42.235 real 0m12.158s 00:40:42.235 user 0m29.317s 00:40:42.235 sys 0m2.794s 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:42.235 07:50:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:42.235 ************************************ 00:40:42.235 END TEST keyring_file 00:40:42.235 ************************************ 00:40:42.495 07:50:10 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:40:42.495 07:50:10 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:42.495 07:50:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:42.495 07:50:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:42.495 07:50:10 -- 
common/autotest_common.sh@10 -- # set +x 00:40:42.495 ************************************ 00:40:42.495 START TEST keyring_linux 00:40:42.495 ************************************ 00:40:42.495 07:50:10 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:42.495 Joined session keyring: 1063749202 00:40:42.495 * Looking for test storage... 00:40:42.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:42.495 07:50:10 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:42.495 07:50:10 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:40:42.495 07:50:10 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:42.755 07:50:10 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@345 -- # : 1 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:42.755 07:50:10 keyring_linux -- scripts/common.sh@368 -- # return 0 00:40:42.755 07:50:10 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:42.755 07:50:10 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.755 --rc genhtml_branch_coverage=1 00:40:42.755 --rc genhtml_function_coverage=1 00:40:42.755 --rc genhtml_legend=1 00:40:42.755 --rc geninfo_all_blocks=1 00:40:42.755 --rc geninfo_unexecuted_blocks=1 00:40:42.755 00:40:42.755 ' 00:40:42.755 07:50:10 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.755 --rc genhtml_branch_coverage=1 00:40:42.755 --rc genhtml_function_coverage=1 00:40:42.755 --rc genhtml_legend=1 00:40:42.756 --rc geninfo_all_blocks=1 00:40:42.756 --rc geninfo_unexecuted_blocks=1 00:40:42.756 00:40:42.756 ' 00:40:42.756 07:50:10 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:42.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.756 --rc genhtml_branch_coverage=1 00:40:42.756 --rc genhtml_function_coverage=1 00:40:42.756 --rc genhtml_legend=1 00:40:42.756 --rc geninfo_all_blocks=1 00:40:42.756 --rc geninfo_unexecuted_blocks=1 00:40:42.756 00:40:42.756 ' 00:40:42.756 07:50:10 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:42.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.756 --rc genhtml_branch_coverage=1 00:40:42.756 --rc genhtml_function_coverage=1 00:40:42.756 --rc genhtml_legend=1 00:40:42.756 --rc geninfo_all_blocks=1 00:40:42.756 --rc geninfo_unexecuted_blocks=1 00:40:42.756 00:40:42.756 ' 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:42.756 07:50:10 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:40:42.756 07:50:10 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:42.756 07:50:10 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:42.756 07:50:10 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:42.756 07:50:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.756 07:50:10 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.756 07:50:10 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.756 07:50:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:42.756 07:50:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:42.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:42.756 /tmp/:spdk-test:key0 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:42.756 
07:50:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:40:42.756 07:50:10 keyring_linux -- nvmf/common.sh@733 -- # python - 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:42.756 07:50:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:42.756 /tmp/:spdk-test:key1 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1795030 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1795030 00:40:42.756 07:50:10 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:42.756 07:50:10 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1795030 ']' 00:40:42.756 07:50:10 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:42.756 07:50:10 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:42.756 07:50:10 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:42.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:42.756 07:50:10 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:42.756 07:50:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:42.756 [2024-11-26 07:50:10.803119] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:40:42.756 [2024-11-26 07:50:10.803195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795030 ] 00:40:43.016 [2024-11-26 07:50:10.883014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.016 [2024-11-26 07:50:10.914514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:43.588 07:50:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:43.588 [2024-11-26 07:50:11.595360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:43.588 null0 00:40:43.588 [2024-11-26 07:50:11.627414] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:43.588 [2024-11-26 07:50:11.627772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:43.588 07:50:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:43.588 877412904 00:40:43.588 07:50:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:43.588 996263630 00:40:43.588 07:50:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1795327 00:40:43.588 07:50:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1795327 /var/tmp/bperf.sock 00:40:43.588 07:50:11 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1795327 ']' 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:43.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:43.588 07:50:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:43.849 [2024-11-26 07:50:11.707506] Starting SPDK v25.01-pre git sha1 9ebbe7008 / DPDK 24.03.0 initialization... 
00:40:43.849 [2024-11-26 07:50:11.707556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795327 ] 00:40:43.849 [2024-11-26 07:50:11.789172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.849 [2024-11-26 07:50:11.819213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:44.418 07:50:12 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:44.418 07:50:12 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:40:44.418 07:50:12 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:44.418 07:50:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:44.679 07:50:12 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:44.679 07:50:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:44.939 07:50:12 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:44.939 07:50:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:44.939 [2024-11-26 07:50:13.019218] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:45.200 nvme0n1 00:40:45.200 07:50:13 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:45.200 07:50:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:45.200 07:50:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:45.200 07:50:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:45.200 07:50:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:45.200 07:50:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:45.200 07:50:13 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:45.200 07:50:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:45.462 07:50:13 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:45.462 07:50:13 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:45.462 07:50:13 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:45.462 07:50:13 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:45.462 07:50:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:45.462 07:50:13 keyring_linux -- keyring/linux.sh@25 -- # sn=877412904 00:40:45.462 07:50:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:45.462 07:50:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:45.462 07:50:13 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 877412904 == \8\7\7\4\1\2\9\0\4 ]] 00:40:45.462 07:50:13 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 877412904 00:40:45.462 07:50:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:45.462 07:50:13 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:45.724 Running I/O for 1 seconds... 00:40:46.668 24506.00 IOPS, 95.73 MiB/s 00:40:46.668 Latency(us) 00:40:46.668 [2024-11-26T06:50:14.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:46.668 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:46.668 nvme0n1 : 1.01 24506.15 95.73 0.00 0.00 5207.51 2088.96 6498.99 00:40:46.668 [2024-11-26T06:50:14.766Z] =================================================================================================================== 00:40:46.668 [2024-11-26T06:50:14.766Z] Total : 24506.15 95.73 0.00 0.00 5207.51 2088.96 6498.99 00:40:46.668 { 00:40:46.668 "results": [ 00:40:46.668 { 00:40:46.668 "job": "nvme0n1", 00:40:46.668 "core_mask": "0x2", 00:40:46.668 "workload": "randread", 00:40:46.668 "status": "finished", 00:40:46.668 "queue_depth": 128, 00:40:46.668 "io_size": 4096, 00:40:46.668 "runtime": 1.005258, 00:40:46.668 "iops": 24506.1466807526, 00:40:46.668 "mibps": 95.72713547168985, 00:40:46.668 "io_failed": 0, 00:40:46.668 "io_timeout": 0, 00:40:46.668 "avg_latency_us": 5207.514624450308, 00:40:46.668 "min_latency_us": 2088.96, 00:40:46.668 "max_latency_us": 6498.986666666667 00:40:46.668 } 00:40:46.668 ], 00:40:46.668 "core_count": 1 00:40:46.668 } 00:40:46.668 07:50:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:46.668 07:50:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:46.929 07:50:14 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:46.929 07:50:14 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:46.929 07:50:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:46.929 07:50:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:46.929 07:50:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:46.929 07:50:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:46.929 07:50:14 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:46.929 07:50:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:46.929 07:50:14 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:46.929 07:50:14 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:46.929 07:50:14 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:40:46.929 07:50:14 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:40:46.929 07:50:14 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:40:46.929 07:50:14 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:46.929 07:50:14 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:40:46.929 07:50:14 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:46.929 07:50:14 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:46.930 07:50:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:47.191 [2024-11-26 07:50:15.156794] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:47.191 [2024-11-26 07:50:15.157713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e4f0 (107): Transport endpoint is not connected 00:40:47.191 [2024-11-26 07:50:15.158709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213e4f0 (9): Bad file descriptor 00:40:47.191 [2024-11-26 07:50:15.159711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:40:47.191 [2024-11-26 07:50:15.159718] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:47.191 [2024-11-26 07:50:15.159723] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:40:47.191 [2024-11-26 07:50:15.159730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:40:47.191 request: 00:40:47.191 { 00:40:47.191 "name": "nvme0", 00:40:47.191 "trtype": "tcp", 00:40:47.191 "traddr": "127.0.0.1", 00:40:47.191 "adrfam": "ipv4", 00:40:47.191 "trsvcid": "4420", 00:40:47.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:47.191 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:47.191 "prchk_reftag": false, 00:40:47.191 "prchk_guard": false, 00:40:47.191 "hdgst": false, 00:40:47.191 "ddgst": false, 00:40:47.191 "psk": ":spdk-test:key1", 00:40:47.191 "allow_unrecognized_csi": false, 00:40:47.191 "method": "bdev_nvme_attach_controller", 00:40:47.191 "req_id": 1 00:40:47.191 } 00:40:47.191 Got JSON-RPC error response 00:40:47.191 response: 00:40:47.191 { 00:40:47.191 "code": -5, 00:40:47.191 "message": "Input/output error" 00:40:47.191 } 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@33 -- # sn=877412904 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 877412904 00:40:47.191 1 links removed 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@33 -- # sn=996263630 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 996263630 00:40:47.191 1 links removed 00:40:47.191 07:50:15 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1795327 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1795327 ']' 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1795327 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795327 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795327' 00:40:47.191 killing process with pid 1795327 00:40:47.191 07:50:15 keyring_linux -- common/autotest_common.sh@973 -- # kill 1795327 00:40:47.192 Received shutdown signal, test time was about 1.000000 seconds 00:40:47.192 00:40:47.192 
Latency(us) 00:40:47.192 [2024-11-26T06:50:15.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:47.192 [2024-11-26T06:50:15.290Z] =================================================================================================================== 00:40:47.192 [2024-11-26T06:50:15.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:47.192 07:50:15 keyring_linux -- common/autotest_common.sh@978 -- # wait 1795327 00:40:47.453 07:50:15 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1795030 00:40:47.453 07:50:15 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1795030 ']' 00:40:47.453 07:50:15 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1795030 00:40:47.453 07:50:15 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:40:47.453 07:50:15 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:47.453 07:50:15 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795030 00:40:47.453 07:50:15 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:47.453 07:50:15 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:47.453 07:50:15 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795030' 00:40:47.453 killing process with pid 1795030 00:40:47.453 07:50:15 keyring_linux -- common/autotest_common.sh@973 -- # kill 1795030 00:40:47.453 07:50:15 keyring_linux -- common/autotest_common.sh@978 -- # wait 1795030 00:40:47.714 00:40:47.714 real 0m5.206s 00:40:47.714 user 0m9.688s 00:40:47.714 sys 0m1.444s 00:40:47.714 07:50:15 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:47.714 07:50:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:47.714 ************************************ 00:40:47.714 END TEST keyring_linux 00:40:47.714 ************************************ 00:40:47.714 07:50:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:40:47.714 07:50:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:47.714 07:50:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:47.714 07:50:15 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:40:47.714 07:50:15 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:40:47.714 07:50:15 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:40:47.714 07:50:15 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:40:47.714 07:50:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:47.714 07:50:15 -- common/autotest_common.sh@10 -- # set +x 00:40:47.714 07:50:15 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:40:47.714 07:50:15 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:40:47.714 07:50:15 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:40:47.714 07:50:15 -- common/autotest_common.sh@10 -- # set +x 00:40:55.877 INFO: APP EXITING 
00:40:55.877 INFO: killing all VMs 00:40:55.877 INFO: killing vhost app 00:40:55.877 WARN: no vhost pid file found 00:40:55.877 INFO: EXIT DONE 00:40:59.179 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:65:00.0 (144d a80a): Already using the nvme driver 00:40:59.179 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:40:59.179 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:40:59.440 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:40:59.440 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:41:03.646 Cleaning 00:41:03.646 Removing: /var/run/dpdk/spdk0/config 00:41:03.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:03.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:03.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:03.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:03.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:03.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:03.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:03.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:03.646 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:03.646 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:03.646 Removing: /var/run/dpdk/spdk1/config 00:41:03.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:03.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:03.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:03.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:03.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:03.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:03.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:03.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:03.646 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:03.646 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:03.646 Removing: /var/run/dpdk/spdk2/config 00:41:03.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:03.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:03.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:03.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:03.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:03.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:03.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:03.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:03.646 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:03.646 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:03.646 Removing: 
/var/run/dpdk/spdk3/config 00:41:03.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:03.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:03.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:03.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:03.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:03.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:03.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:03.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:03.646 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:03.646 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:03.646 Removing: /var/run/dpdk/spdk4/config 00:41:03.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:03.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:03.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:03.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:03.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:03.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:03.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:03.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:03.646 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:41:03.646 Removing: /var/run/dpdk/spdk4/hugepage_info 00:41:03.646 Removing: /dev/shm/bdev_svc_trace.1 00:41:03.646 Removing: /dev/shm/nvmf_trace.0 00:41:03.646 Removing: /dev/shm/spdk_tgt_trace.pid1217876 00:41:03.646 Removing: /var/run/dpdk/spdk0 00:41:03.646 Removing: /var/run/dpdk/spdk1 00:41:03.646 Removing: /var/run/dpdk/spdk2 00:41:03.646 Removing: /var/run/dpdk/spdk3 00:41:03.646 Removing: /var/run/dpdk/spdk4 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1216384 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1217876 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1218730 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1219766 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1219878 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1221162 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1221186 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1221642 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1222782 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1223265 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1223650 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1224047 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1224453 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1224858 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1225209 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1225503 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1225768 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1227018 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1230412 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1230702 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1231066 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1231382 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1231752 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1231858 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1232421 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1232473 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1232841 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1233032 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1233213 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1233497 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1233996 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1234296 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1234589 00:41:03.646 Removing: 
/var/run/dpdk/spdk_pid1239277 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1244592 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1257115 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1257948 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1263038 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1263473 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1268771 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1275854 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1278964 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1291553 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1302552 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1304570 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1305712 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1327146 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1331907 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1388114 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1394596 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1401656 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1409561 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1409563 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1410565 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1411572 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1412577 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1413356 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1413396 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1413694 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1413907 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1414025 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1415399 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1416500 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1417504 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1418186 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1418188 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1418517 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1419960 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1421102 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1431031 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1465545 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1470946 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1472950 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1475097 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1475326 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1475668 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1476003 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1476727 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1479067 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1480158 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1480865 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1483581 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1484294 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1485010 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1490064 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1496797 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1496798 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1496799 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1502039 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1512287 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1517105 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1524374 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1525894 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1527701 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1529253 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1534917 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1540321 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1545250 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1554629 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1554632 00:41:03.646 Removing: 
/var/run/dpdk/spdk_pid1560151 00:41:03.646 Removing: /var/run/dpdk/spdk_pid1560477 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1560814 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1561158 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1561164 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1566870 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1567395 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1572872 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1576196 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1582611 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1589149 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1599191 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1608016 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1608043 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1631397 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1632171 00:41:03.647 Removing: /var/run/dpdk/spdk_pid1632856 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1633543 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1634602 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1635328 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1636208 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1636973 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1642026 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1642366 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1649516 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1649785 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1656239 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1661837 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1673517 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1674193 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1679240 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1679592 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1684631 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1691499 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1694451 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1706635 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1717877 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1719885 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1720899 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1740508 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1745177 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1748415 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1755951 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1756082 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1762078 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1764740 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1767219 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1768541 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1771064 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1772378 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1782534 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1783064 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1783662 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1786527 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1787181 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1787841 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1792699 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1792735 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1794549 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1795030 00:41:03.907 Removing: /var/run/dpdk/spdk_pid1795327 00:41:03.907 Clean 00:41:04.169 07:50:32 -- common/autotest_common.sh@1453 -- # return 0 00:41:04.169 07:50:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:41:04.169 07:50:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:04.169 07:50:32 -- common/autotest_common.sh@10 -- # set +x 00:41:04.169 07:50:32 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:41:04.169 07:50:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:04.169 07:50:32 -- common/autotest_common.sh@10 -- # set +x 00:41:04.169 07:50:32 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:04.169 07:50:32 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:41:04.169 07:50:32 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:41:04.169 07:50:32 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:41:04.169 07:50:32 -- spdk/autotest.sh@398 -- # hostname 00:41:04.169 07:50:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:41:04.430 geninfo: WARNING: invalid characters removed from testname! 00:41:31.004 07:50:57 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:32.916 07:51:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:34.299 07:51:02 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:36.838 07:51:04 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:38.217 07:51:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:40.126 07:51:07 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:41:42.668 07:51:10 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:41:42.668 07:51:10 -- spdk/autorun.sh@1 -- $ timing_finish 00:41:42.668 07:51:10 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:41:42.668 07:51:10 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:41:42.668 07:51:10 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:41:42.669 07:51:10 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:41:42.669 + [[ -n 1130976 ]] 00:41:42.669 + sudo kill 1130976 00:41:42.679 [Pipeline] } 00:41:42.696 [Pipeline] // stage 00:41:42.703 [Pipeline] } 00:41:42.719 [Pipeline] // timeout 00:41:42.726 [Pipeline] } 00:41:42.742 [Pipeline] // catchError 00:41:42.747 [Pipeline] } 00:41:42.762 [Pipeline] // wrap 00:41:42.768 [Pipeline] } 00:41:42.781 [Pipeline] // catchError 00:41:42.791 [Pipeline] stage 00:41:42.794 [Pipeline] { (Epilogue) 00:41:42.807 [Pipeline] catchError 00:41:42.809 [Pipeline] { 00:41:42.822 [Pipeline] echo 00:41:42.823 Cleanup processes 00:41:42.830 [Pipeline] sh 00:41:43.119 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:43.119 1808879 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:43.135 [Pipeline] sh 00:41:43.423 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:41:43.423 ++ grep -v 'sudo pgrep' 00:41:43.423 ++ awk '{print $1}' 00:41:43.423 + sudo kill -9 00:41:43.423 + true 00:41:43.436 [Pipeline] sh 00:41:43.724 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:55.958 [Pipeline] sh 00:41:56.244 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:56.244 Artifacts sizes are good 00:41:56.258 [Pipeline] archiveArtifacts 00:41:56.265 Archiving artifacts 00:41:56.430 [Pipeline] sh 00:41:56.767 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:41:56.793 [Pipeline] cleanWs 00:41:56.803 [WS-CLEANUP] Deleting project workspace... 00:41:56.803 [WS-CLEANUP] Deferred wipeout is used... 00:41:56.810 [WS-CLEANUP] done 00:41:56.812 [Pipeline] } 00:41:56.828 [Pipeline] // catchError 00:41:56.838 [Pipeline] sh 00:41:57.123 + logger -p user.info -t JENKINS-CI 00:41:57.134 [Pipeline] } 00:41:57.147 [Pipeline] // stage 00:41:57.153 [Pipeline] } 00:41:57.166 [Pipeline] // node 00:41:57.171 [Pipeline] End of Pipeline 00:41:57.203 Finished: SUCCESS